
Google I/O 2019 was all about AI, Privacy and Accessibility


At Google I/O 2019, the advances Google made in AI and machine learning were put to use for enhancing privacy and accessibility.

I’ve attended Google I/O in person only once. It was in 2014. I’ve been following this event from afar ever since, making it a point to watch the keynote every year, trying to figure out where Google is headed – and how that will affect the industry.

This weekend I spent some time going over the Google I/O 2019 keynote. If you haven’t seen it, you can watch it on YouTube – I’ve embedded it here as well.

The main theme of Google I/O 2019

Here’s how I ended my analysis of Google I/O 2018:

Where are we headed?

That’s the big question, I guess.

More machine learning and AI. Expect Google I/O 2019 to be on the same theme.

If you don’t have it in your roadmap, time to see how to fit it in.

In many ways, this could easily be the end of this article as well – the tl;dr version.

Google got to the heart of their keynote only around the 36-minute mark. Sundar Pichai, CEO of Google, talked about the “For Everyone” theme of this event and where Google is headed. For Everyone – not just for the rich (Apple?) or the people in developed countries, but For Everyone.

The first thing he talked about in this For Everyone context? AI:

From there, everything Google does is about how the AI research work and breakthroughs they are making at their scale can fit into the direction they want to take.

This year, that direction was defined by the words privacy, security and accessibility.

Privacy because they are being scrutinized over their data collection, which is directly linked to their business model. But more so because of a recent breakthrough that allows them to run accurate speech to text on devices (more on that later).

Security because of the growing number of hacking and malware attacks we hear about all the time. But more so because the work Google has put into Android from all angles is putting them ahead of the competition (think Apple) based on third-party reports (Gartner in this case).

Interestingly, Apple is attacking Google around both privacy and security.

Accessibility because that’s the next billion users. The bigger market. The way to grow by reaching ever larger audiences. But also because it fits well with that breakthrough in speech to text and with machine learning as a whole. And somewhat because of diversity and inclusion, which are big words and concepts in tech and Silicon Valley today (and you have to appease the crowds and your own employees). And also because it films well and it really does benefit the world and people – though that’s secondary for corporations.

The big reveal for me at Google I/O 2019? Definitely the advances in speech analytics, with speech to text shrunk enough to fit on a mobile device. It was the main pillar of this show, and of things to come in the future if you ask me.

A lot of the AI innovations Google is talking about are around real time communications. Check out the recent report I’ve written with Chad Hart on the subject:

AI in RTC report

Event Timeline

I wanted to know what’s important to Google this year, so I took a rough timeline of the event, breaking it down into the minutes spent on each topic. In each and every topic discussed, machine learning and AI were evident.

Time spent per topic:

  • 10 min – Search; introduction of new feature(s)
  • 8 min – Google Lens; introduction of new feature(s) – related to speech to text
  • 16 min – Google Assistant (Duplex on the web, Assistant, driving mode)
  • 19 min – For Everyone (AI, bias, privacy+security, accessibility)
  • 14 min – Android Q enhancements and improvements (software)
  • 9 min – Nest (home)
  • 9 min – Pixel (smartphone hardware)
  • 16 min – Google AI

Let’s put this in perspective: out of roughly 100 minutes, 51 were spent directly on AI (Assistant, For Everyone and Google AI) and the rest of the time was spent on… AI, though indirectly.

Watching the event, I must say it got me thinking of my time at the university. I had a neighbor at the dorms who was a professional juggler. Maybe not professional, but he did get paid for juggling occasionally. He was able to juggle 5 torches or clubs, 5 apples (while eating one) and anywhere between 7-11 balls (I didn’t keep track).

One evening he came storming into our room, asking us all to watch a new trick he had been working on and just perfected. We all looked. And found it boring. Not because it wasn’t hard or impressive, but because we all knew that this was most definitely within his comfort zone and the things he can do. Funny thing is – he visited us here in Israel a few weeks back. My wife asked him if he juggles anymore. He said a bit, and said his kids aren’t impressed. How could they be when it’s obvious to them that he can?

Anyhow, there’s no wow factor in what Google is doing with machine learning anymore. It’s obvious that every year, at every Google I/O event, some new innovation around this topic will be introduced.

This time, it was all about voice and text.

Time to dive into what went on at the Google I/O 2019 keynote.

Speech to text on device

We got a glimpse of this piece of technology late last year when Google introduced call screening on its Pixel 3 devices. This capability allows people to let the Pixel answer calls on their behalf, see what callers are saying using live transcription and decide how to act.

This was all done on device. At Google I/O 2019, this technology was added across the board in Android 10 to anything and everything.

On stage, the explanation given was that the model used for speech to text in the cloud is 2.5GB in size, and Google was able to squeeze it down to 80MB, which meant being able to run it on devices. It was not indicated whether this works for any language other than English, which probably means this is an English-only capability for now.
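Google didn’t explain on stage how they got from 2.5GB to 80MB – a ~30x reduction almost certainly combines several techniques (pruning, architecture changes and more). One standard building block is post-training quantization. Here’s a toy NumPy sketch of the idea; the matrix, sizes and quantization scheme are illustrative assumptions, not Google’s actual pipeline:

```python
import numpy as np

# Toy weight matrix standing in for one layer of a speech model.
weights = np.random.randn(1024, 1024).astype(np.float32)

def quantize_int8(w):
    """Symmetric linear post-training quantization to int8."""
    scale = np.abs(w).max() / 127.0              # map the largest value to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q, scale = quantize_int8(weights)
print(weights.nbytes // q.nbytes)                # 4x smaller: float32 -> int8
print(np.abs(dequantize(q, scale) - weights).max() < scale)  # error within one step
```

Quantization alone buys roughly 4x; getting to 30x requires the heavier tools mentioned above.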

What does Google gain from this capability?

  1. Faster speech to text. There’s no need to send audio to the cloud and get text back from it
  2. Ability to run it with no network or under poor network conditions
  3. Privacy of what’s being said

For now, Google will be rolling this out to Android devices and not just Google Pixel devices. No mention of if or when this will get to iOS devices.

What have they done with it?

  • Made the Google Assistant more responsive (due to faster speech to text)
  • Created system-wide automatic captioning for everything that runs on Android. Anywhere, in any app


Search

Google’s origins came from Search, so Google decided to start the keynote with search.

Nothing super interesting there in the announcements made, besides the continuous improvements. What was showcased was news and podcasts.

How Google decided to handle fake news and news coverage is now coming directly to Search. Podcasts are now searchable and more accessible directly from search.

Aside from that?

A new shiny object – the ability to show 3D models in search results and in augmented reality.

3D AR models in search results

Nice, but not earth shattering. At least not yet.

Google Lens

After Search, Google Lens was showcased.

The main theme around it? The ability to capture text in real time from images and do stuff with it. Usually either text to speech or translation.

Google Lens looking at a menu

In the screenshot above, Google Lens marks the recommended dishes on a menu. While nice, this probably requires each and every such feature to be baked into Lens, much like new actions need to be baked into the Google Assistant (or skills into Amazon Alexa).

This falls nicely into the For Everyone / Accessibility theme of the keynote. Aparna Chennapragada, Head of Product for Lens, had the following to say (after an emotional video of a woman who can’t read using the new Lens):

“The power to read is the power to buy a train ticket. To shop in a store. To follow the news. It is the power to get things done. So we want to make this feature to be as accessible to as many people as possible, so it already works in a dozen of languages.”

It truly is. People can’t really be a part of our world without the power to read.

It is also the only announcement I remember where the number of languages covered was mentioned (which is why I believe speech to text on device is English only).

Google made the case here, and in virtually every part of the keynote, for using AI for the greater good – for accessibility and inclusion.

Google Assistant

Google Assistant had its share of the keynote with four main announcements:

Google Assistant features

Duplex on the web is a smarter autofill feature for web forms.

Next generation Assistant is faster and smarter than its predecessor. There were two main aspects of it that were really interesting to me:

  1. It is “10 times faster”, most likely due to speech to text on the phone, which removes the need for the cloud in many tasks
  2. It works across tabs and apps. A demo was shown where a woman instructed the Assistant to search for a photo, picked one out and then asked the phone to send it in an ongoing chat conversation just by saying “send it to Justin”

Google Assistant cross apps

Every year Google seems to make Assistant more conversational, able to handle more intents and actions – and understand more of the context needed for complex tasks.

For Everyone

I’ve written about For Everyone earlier in this article.

I want to cover two more aspects of it: federated learning and Project Euphonia.

Federated Learning

Machine learning requires tons of data. The more data, the better the resulting model is at predicting new inputs. Google is often criticized for collecting that data, but it needs it not only for monetization but also, to a large extent, for improving its AI models.

Enter federated learning, a way to learn a bit at the edge of the network, directly inside the devices, and share what gets learned, in a secure fashion, with the central model being built in the cloud.

Federated learning @ Google
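The mechanics can be sketched in a few lines. This is a toy FedAvg-style simulation under stated assumptions – a linear model, simulated “devices” with local data, and plain gradient steps. Google’s production system adds secure aggregation, update compression and much more:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])        # the pattern all devices jointly learn

def make_device_data(n=50):
    """Local data that never leaves the (simulated) device."""
    x = rng.normal(size=(n, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=n)
    return x, y

devices = [make_device_data() for _ in range(10)]
global_w = np.zeros(2)

for _ in range(20):                   # communication rounds
    local_ws = []
    for x, y in devices:
        w = global_w.copy()
        for _ in range(5):            # a few local gradient steps, on-device
            grad = 2 * x.T @ (x @ w - y) / len(y)
            w -= 0.1 * grad
        local_ws.append(w)            # only model weights leave the device
    global_w = np.mean(local_ws, axis=0)  # the server averages the updates

print(global_w)                       # converges close to true_w = [2, -1]
```

The key property is the last comment: raw data stays on the device, and only the learned weights are shared with the cloud.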

This was so important for Google to show and explain that Sundar Pichai himself presented it and gave that spiel, instead of leaving it to the final part of the keynote, where Google AI was discussed almost separately.

At Google, this looks like an initiative that is only getting started, with its first public implementation embedded in Google’s predictive keyboard on Android and how that keyboard learns new words and trends.

Project Euphonia

Project Euphonia was also introduced here. This project is about improving speech recognition models for hard-to-understand speech.

Here Google stressed the work and effort it is putting into collecting recorded phrases from people with such impairments. The main challenge here is the creation or improvement of a model, more than anything else.

Android Q

Or Android 10 – pick your name for it.

This one was, more than anything, a shopping list of features.

Statistics were given first:

  • 2.5 billion active devices
  • Over 180 device makers

Live captions were again explained and introduced, along with on-device learning capabilities. AI at its best, baked into the OS itself.

For some reason, the Android Q segment wasn’t followed by the Pixel one but rather by the Nest one.

Nest (helpful home)

Google rebranded all of its smart home devices under Nest.

While at it, they decided to try and differentiate from the rest of the pack by coining their solution the “helpful home” as opposed to the “smart home”.

As with everything else, AI and the Assistant took center stage, along with a new device, the Nest Hub Max, which is Google’s answer to the Facebook Portal.

Nest Hub Max announcement

The solution for video calling on the Nest Hub Max was built around Google Duo (obviously), with an auto-zoom ability similar to what Facebook Portal has, at least on paper – it wasn’t really demoed or showcased on stage.

Google Duo on Nest Hub Max

The reason no demo was given is that this device will ship “later this summer”, which suggests it wasn’t really ready for prime time – or Google just didn’t want to spend more valuable minutes on it during the keynote.

Interestingly, Google Duo’s recent addition of group video calling wasn’t mentioned throughout the keynote at all.

Pixel (phone)

The Pixel section of the keynote showcased new Pixel phone devices, the Pixel 3a and 3a XL. These are low-cost devices, which try to make do with lower hardware specs by offering better software and AI capabilities. To drive that point home, Google had this slide to show:

Pixel 3a vs iPhone X camera

Google is continuing its investment in computational photography, and if the results are as good as this example, I am sold.

The other nice feature shown was call screening:

Call screening on Android Q

The neat thing is that your phone can act as your personal secretary, checking who’s calling and why, and also conversing with the caller based on your instructions. This obviously makes use of the same innovations in Android around speech to text and smart reply.

My current phone is a Xiaomi Mi A1, an Android One device. My next one will probably be the Pixel 3a – at $399, it will probably be the best phone on the market at that price point.

Google AI

The last section of the keynote was given by Jeff Dean, head of Google AI. He was also the one closing the keynote, instead of handing it back to Sundar Pichai. I found that nuance interesting.

In his part he discussed the advancements in natural language understanding (NLU) at Google, the growth of TensorFlow, where Google is putting its efforts in healthcare (this time it was oncology and lung cancer), as well as the AI for Social Good initiative, where flood forecasting was explained.

This finishing touch of Google AI in the keynote, taking 16 full minutes (about 15% of the time), shows that Google was aiming to impress and to focus on the good they are doing in the world, trying to reduce the growing fear factor around their power and data collection capabilities.

It was impressive…

Next year?

More of the same is my guess.

Google will need to find some new innovation to build their event around. Speech to text on device is great, especially with the many use cases it enables and the privacy angle to it. Not sure how they’d top that next year.

What’s certain is that AI and privacy will still be at the forefront for Google throughout 2019 and well into 2020.
