

The following is a guest post by Philip Alexander, CEO of Mentorial and curator of the Startup Digest HR & Employee Experience Reading List. This post originally appeared on Medium.

Sign up for Philip’s Reading List here and follow him on Twitter @philipdalex.


Reflections on the Playfair London AI Summit (July 1st) and Resolution Foundation Robotics Conference (July 4th).

There is No Inherent Trade-off Between Advancing AI and Privacy

There have been, and continue to be, a number of legitimate privacy concerns raised about AI. One example is the Netflix Prize, which ran from 2006 to 2009: Netflix asked researchers to build the best algorithm for predicting which films users would like, based on a training data set of over 100M movie ratings from ~500K users. One group of researchers realized that by cross-referencing this data with public reviews on IMDb, they could identify specific users.
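To make the re-identification risk concrete, here is a minimal, hypothetical sketch of that kind of linkage attack: joining an “anonymised” ratings table to public reviews on movie, score and approximate date. All data and column names below are invented for illustration; this is not the researchers’ actual method.

```python
# Hypothetical illustration of a linkage attack: joining an "anonymised"
# ratings table to public reviews on (movie, rating, approximate date).
# All data and column names here are invented for the example.
import pandas as pd

anonymised = pd.DataFrame({
    "user_id": ["u1", "u1", "u2"],            # pseudonymous IDs
    "movie":   ["Movie A", "Movie B", "Movie A"],
    "rating":  [5, 3, 4],
    "date":    pd.to_datetime(["2006-03-01", "2006-03-04", "2006-03-02"]),
})

public_reviews = pd.DataFrame({
    "reviewer": ["jane_doe", "jane_doe"],      # real public usernames
    "movie":    ["Movie A", "Movie B"],
    "rating":   [5, 3],
    "date":     pd.to_datetime(["2006-03-02", "2006-03-05"]),
})

# Join on movie + rating, then keep matches whose dates are within a few days.
candidates = anonymised.merge(public_reviews, on=["movie", "rating"],
                              suffixes=("_anon", "_public"))
candidates = candidates[
    (candidates["date_anon"] - candidates["date_public"]).abs() <= pd.Timedelta(days=3)
]

# If one pseudonymous ID matches one reviewer across several titles,
# it is very likely the same person.
links = candidates.groupby(["user_id", "reviewer"]).size().reset_index(name="matches")
print(links[links["matches"] >= 2])
```

The point of the sketch is how little it takes: a handful of (title, score, date) triples is often enough to single someone out.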

Significant progress has been made since then, and applications from BBC iPlayer to smart meters in homes have created ways of using AI to make recommendations based on your preferences without the software sharing your information. Privacy is, and should continue to be, something those in the AI field take seriously, particularly as it is applied to more and more areas of our lives.


Humans and AI Can Be Friends

On one level, AI research is taking inspiration from biology and neuroscience to help determine the next wave of breakthroughs, from understanding how the brain learns to work on cortical networks.

In parallel, there is also progress in how humans and AI work together to solve real-world problems. Zooniverse, for example, is a platform where volunteers help solve huge problems, often feeding AI models in the background, e.g. improving satellite imagery analysis during natural disasters or translating old works of literature.

This speaks to the development of a combined human-machine offering and gives a glimpse into the future, with AI starting to take the more functional tasks away from humans in the short term.


AI Has Applications You May Not Expect

It is cliché to state that the cost of creating a new drug and taking it to market is incredibly high, but this is wonderfully illuminated when you consider that if you liquidated Alphabet (market cap $490BN) you would fund 40 new drugs with a runway of 18 months!

Using AI, Stratified Medical is scouring drug research data to identify where existing drugs might be repurposed to treat different conditions. This level of data analysis would never be possible without the assistance of AI.
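As a toy illustration of the general idea (not Stratified Medical’s actual approach, and with entirely invented drug and target names), a repurposing candidate can be surfaced simply by overlapping drug-target links with disease-target links:

```python
# Toy illustration (all data invented): finding drug-repurposing candidates
# by overlapping drug-target links with disease-target links.
drug_targets = {
    "drug_a": {"EGFR", "HER2"},
    "drug_b": {"TNF"},
    "drug_c": {"EGFR"},
}
disease_targets = {
    "disease_x": {"EGFR"},
    "disease_y": {"TNF", "IL6"},
}
approved_for = {
    "drug_a": {"disease_x"},   # already used for this condition
    "drug_b": set(),
    "drug_c": set(),
}

# A drug is a repurposing candidate for a disease if it hits one of the
# disease's targets but is not already approved for that disease.
for disease, targets in disease_targets.items():
    for drug, hits in drug_targets.items():
        shared = targets & hits
        if shared and disease not in approved_for[drug]:
            print(f"{drug} -> {disease} (shared targets: {sorted(shared)})")
```

The real systems work over millions of papers, patents and trial records rather than a dozen dictionary entries, which is where AI-scale text and data mining becomes essential.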


The World is Adapting to AI

Barcodes and QR codes were never designed for humans; they were designed for machines. At present, the impact of AI and robotics depends on the envelope they are allowed to operate within.

As companies and researchers develop new forms of robot- and AI-human interaction, we face major design challenges, both for the devices themselves and for the bounds within which they operate.

This leads to some really serious policy decisions having to be made quickly, as the consensus remains that technology is moving faster than legislation can keep up with. In some specific areas government has taken action, e.g. on autonomous vehicles, but this is certainly not the case in every field where we are seeing AI progress.

Potentially worse than no legislation is ineffective or poorly drafted legislation. As this paper by Goodman and Flaxman notes, the EU regulations on algorithmic decision-making include a “right to explanation”, which would give EU citizens the right to have algorithmic decisions made about them explained. For companies whose AI relies on black-box models, this could pose real challenges.
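As a rough illustration of what a post-hoc explanation might look like in practice (one possible approach, not what the regulation prescribes and not tied to any particular company), here is a minimal sketch using scikit-learn’s permutation importance on synthetic data:

```python
# Minimal sketch (synthetic data): producing a post-hoc explanation for a
# black-box classifier via permutation importance. This is one illustrative
# technique, not what the EU rules prescribe.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": an ensemble model whose individual decisions are hard to read.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: how much each input feature drives the model's output.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Note that this only gives a model-level view of which inputs matter; explaining an individual decision to an individual citizen is harder still, which is exactly where black-box systems run into trouble.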


Partially-formed Thoughts:

Is AI different to the wheel?

The example typically cited by economists is that we have experienced technological advances before and labour has readjusted. Before the wheel, people were employed to carry out the more arduous tasks manually; they then found alternative employment as new technologies were introduced. As each new technology arrives, jobs shift and re-skilling takes place.

Traditional economic theory goes as follows:

Company makes good X → invests in new technology → lowers cost of production → lowers cost to consumer → consumer has more disposable income → consumers buy the same amount of good X → consumers now have leftover money they spend on something else, call it good Y → as they buy more of good Y there are more staff required to make it.

Three points on this:

1. If the impact is geographically constrained, then it can have a devastating effect, e.g. factory closures, call centres, etc.

2. Unlike the wheel, AI is not limited to one purpose. AI can be applied to many more areas and can also improve itself. There may be a substitution effect taking place between different AIs, rather than between AI and humans.

3. New jobs aren’t being created at the same rate. A study by Frey found that the share of people employed in new job categories was just 0.5%. This means we have to see people moving into traditional jobs which may not be going away in the short term, e.g. care giving and social work. The question then is: how do we reward these jobs more appropriately?

With reference to job destruction, I think there is a nuance around jobs vs. tasks or roles being automated. If you look at certain tasks around scheduling, data summarization and research, there are clear applications of AI that will remove tasks people currently have to do. With this extra time, they could focus on more creative and value-adding activities.


Thanks to Playfair Capital and Nathan Benaich for putting on the AI Summit, and to the Resolution Foundation for its robotics conference; both were great events.

I’d love to get people’s thoughts on this, so please comment.




Philip Alexander


