Google could easily have avoided the recent loss of confidence caused by ads placed alongside extremist content by deploying AI technology. The fact is, they never wanted to because they had no incentive to do so – Google makes its revenues from advertising. To demonstrate how easy it is for them, I predict that by the end of May 2017 Google will announce that they have “cracked the problem”.
I have worked with leading-edge B2C fashion retailers – some of the most demanding users of ecommerce – so I am fairly familiar with the techniques used to find and engage potential customers, track them and analyse their behaviour. The same technology can be used to help solve the “Adtech problem”; it is core to Google’s business model.
Anyone who has read my blogs knows that I consider advertising to be a useless human activity. Personally, I run an ad-blocker, use the DuckDuckGo search engine (excellent) and a VPN or Tor, so I am fairly immune to advertising. I stopped using Facebook years ago – they started serving me ads for Australian budgie smugglers, probably based on a photo someone had taken of me on the beach and the fact that I seem to be related to half of Australia. I felt sorry for the company paying for the PPC ads, since I already own several pairs of budgie smugglers and don’t need to buy Australian ones. I only wear them (and pink polo shirts) to upset my daughters.
So, how will Google crack the problem? I started thinking about how I would go about it. I first got involved with AI in 1983, when I founded and built an AI company using Lisp Machines. Many (most) of the AI techniques developed then (rules, neural networks, machine learning, etc.) are still in use today, but now benefit from modern computing hardware (and, to a lesser extent, software).
First, the problem is easily decomposed: rules can be written to dramatically reduce the number of videos that need more detailed analysis. I’d expect the Google AI team to have this well underway already. The second stage is the more interesting problem, and Google has some fantastic technology to deploy against it. This paper outlines the work they have done to build special hardware to accelerate machine-learning algorithms, specifically the Tensor Processing Unit (TPU) – and beyond ML, several classes of problem (journey routing, supply chains) use pretty heavy tensor mathematics.
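To make the first stage concrete, here is a minimal sketch of what a cheap rule-based pre-filter might look like. This is purely illustrative – the rules, field names and thresholds are my own invention, not Google's actual system – but it shows the idea: fire fast, inexpensive rules first, and only escalate survivors to the expensive ML analysis.

```python
# Illustrative sketch (not Google's actual system): a cheap rule-based
# first pass that shrinks the candidate pool before expensive ML analysis.
# All field names and thresholds are hypothetical.

RULES = [
    lambda v: v["channel_flagged"],           # channel previously flagged
    lambda v: v["report_count"] > 10,         # many user reports
    lambda v: any(k in v["title"].lower()     # crude keyword match
                  for k in ("extremist", "banned")),
]

def needs_detailed_analysis(video: dict) -> bool:
    """Stage 1: if any rule fires, escalate to stage 2 (the ML classifier)."""
    return any(rule(video) for rule in RULES)

videos = [
    {"title": "Cat compilation", "channel_flagged": False, "report_count": 0},
    {"title": "Extremist rally footage", "channel_flagged": False, "report_count": 2},
]
escalated = [v for v in videos if needs_detailed_analysis(v)]
print(len(escalated))  # 1 – only the second video reaches the ML stage
```

The point of the design is economics: rules cost microseconds per video, so the far more expensive second-stage models only ever see a small fraction of the corpus.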
Google claims that it took only 22 days from design to deployment of the TPU.
Several other interesting facts emerge: first, this is the technology Google has deployed to deliver the vast improvement in machine translation; second, the machine-learning models themselves are surprisingly small. Another aspect is the size of the team required to produce this capability – 70 people. Big technical advances don’t come cheap.
Machine translation and continuous speech recognition have always been among the most interesting problems in AI. Ray Kurzweil, a director of engineering at Google, had already made a name for himself in the mid-1980s, and his predictions make interesting reading.
The focus on low power consumption is another indication of the advances made since the mid-1980s. We had quite a few Lisp Machines and they were power-hungry beasts – we never needed any office heating!
Aside – Lisp Machines were wonderfully productive (20+ times more productive than other c.1985 software development technology) but came at a price: $100k for a single-user workstation. They were examples of “special purpose hardware” designed to run Lisp, with a tagged architecture to assist with Lisp types and garbage collection.
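For readers unfamiliar with tagged architectures, here is a tiny sketch of the idea in Python. The tag layout below is invented for illustration (real Lisp Machines used different widths and encodings): a few bits of every machine word identify the Lisp type, so a type check – or a garbage collector deciding whether a word is a pointer – is a single mask operation.

```python
# Illustrative sketch of a tagged word. The 3-bit tag layout is
# hypothetical; real Lisp Machine encodings differed.

TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1
TAG_FIXNUM, TAG_CONS, TAG_SYMBOL = 0, 1, 2  # example type tags

def tag(value: int, tag_id: int) -> int:
    """Pack a value and its type tag into a single word."""
    return (value << TAG_BITS) | tag_id

def type_of(word: int) -> int:
    """Hardware-style type check: just mask off the low tag bits."""
    return word & TAG_MASK

def untag(word: int) -> int:
    """Recover the value by shifting the tag bits away."""
    return word >> TAG_BITS

word = tag(42, TAG_FIXNUM)
print(type_of(word) == TAG_FIXNUM, untag(word))  # True 42
```

Because the tag travels with the word itself, the hardware could dispatch on type and the collector could distinguish pointers from immediates without any table lookups – which is exactly the kind of support conventional processors of the era lacked.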