05 Jul

The World of Artificial Intelligence: A Recap of EmTech Digital

This blog post was co-written by Ali Ishaq and Mark Villarrubia

Artificial Intelligence (AI) is officially introducing itself to the world. At the EmTech Digital conference in San Francisco, many of the field’s leading representatives came to talk theory and show off their achievements in autonomous robotics and machine learning. The two-day event was split between machine learning on the first day and autonomous vehicles on the second, a wave powered mainly by Google and Tesla.

Day 1:

On the first day, speakers such as Oren Etzioni of the Allen Institute for Artificial Intelligence; Peter Norvig, Google’s Director of Research; and Andrew Ng, Chief Scientist of Baidu, touched upon some of the ethical aspects of artificial intelligence and spoke about their visions of AI’s future potential. The fear of job replacement and Terminator-style world-domination prophecies were addressed, but most importantly, the speakers talked about the innovations that have materialized from these ideas and how they are being incorporated into the technologies we use today.

One critical use for AI is making judgments in gray areas, which traditional computing struggles with. According to Alan Packer, Director of Engineering for Facebook’s Language Technology department, people more often than not use idioms and slang in their online posts and comments, making translation difficult; the shift from “What’s up?” to “Sup” is just one of many variations on a common phrase. AI applies machine learning to troves of data in order to parse meaning from this human variability, picking up the slack where the traditional computational approach falls short.
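
To make this concrete, here is a toy sketch (ours, not Facebook’s) of how machine learning can map slang variants onto a canonical phrase: character n-grams tolerate the spelling variation that exact string matching cannot. The tiny training set and scikit-learn pipeline below are purely illustrative assumptions.

```python
# Toy illustration (not Facebook's system): map informal variants of a
# phrase to a canonical form, so "Sup" and "whats up" land in the same
# bucket despite their surface differences.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled examples; a production system would learn from the
# "troves of data" described in the talk.
variants = ["what's up?", "whats up", "sup", "wassup",
            "how are you", "how r u", "hows it going"]
canonical = ["WHATS_UP", "WHATS_UP", "WHATS_UP", "WHATS_UP",
             "HOW_ARE_YOU", "HOW_ARE_YOU", "HOW_ARE_YOU"]

# Character n-grams are robust to dropped letters and creative spelling.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(variants, canonical)

print(model.predict(["sup?"]))       # expected: ['WHATS_UP']
print(model.predict(["how r you"]))  # expected: ['HOW_ARE_YOU']
```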

[Image: Understanding the language of Facebook with Alan Packer of Facebook]

In addition, AI has shown promise in recognizing the categories humans naturally use. While we might label parts of a picture a ‘face’ or a ‘nose’, from a software perspective these are seemingly arbitrary groupings of pixels. Facial recognition has recently become common on the market in the form of photo-filter applications, but other features of the world are harder to identify. Facebook and Pinterest demonstrated next-level machine-learning technology for identifying features of pictures: Pinterest by letting users crop photos to focus on a particular object, and Facebook by labeling the major objects in a picture. Pinterest’s application seemed to be intended as a user aid for finding similar objects; given the cropped picture, it would return similar pictures from its database, potentially revolutionizing the way we search and shop. Facebook’s application, on the other hand, was aimed at accessibility for the blind. In their example photograph, the algorithm identified four people, their skis, the snow they were standing on, and the trees in the background. Then, as a user moused over the picture, the program verbally announced the feature being hovered over. Whether it’s accessibility or exploring interests, machine learning promises to revolutionize the human experience.
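
The retrieval half of Pinterest’s demo can be sketched in a few lines: represent each catalog image as a feature vector (an “embedding”), then rank the catalog by similarity to the embedding of the user’s cropped region. The random vectors below are stand-ins for features a trained network would produce; nothing here reflects Pinterest’s actual implementation.

```python
# Minimal sketch of crop-and-search: rank a catalog of image embeddings
# by cosine similarity to the embedding of the cropped object.
import numpy as np

rng = np.random.default_rng(0)
catalog = rng.normal(size=(10_000, 128))  # 10k catalog images, 128-d features
crop = rng.normal(size=128)               # embedding of the cropped region

def cosine_similarity(matrix, vector):
    """Cosine similarity between each row of `matrix` and `vector`."""
    matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    vector_norm = vector / np.linalg.norm(vector)
    return matrix_norm @ vector_norm

scores = cosine_similarity(catalog, crop)
top5 = np.argsort(scores)[::-1][:5]  # indices of the 5 most similar images
print(top5, scores[top5])
```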

The consensus at the conference was that the field of AI is in an extremely exciting place, with Baidu’s Andrew Ng going as far as calling AI “the new electricity”. According to Ng, this new electricity is displacing many jobs, but it is also creating jobs, especially ones that do not yet exist. Who would have guessed 20 years ago that we would eventually have such massive amounts of data that we would need Data Scientists just to help us understand what it all means? Ng predicts the same will be true of AI, and that entire industries will be born from this single idea of machine learning.

[Image: Panel on Artificial Intelligence]

Another interesting concept, introduced by Manuela Veloso of Carnegie Mellon University and repeated throughout the conference, was Augmented Intelligence, or Intelligence Augmentation: the idea that robots will not be taking over human duties, but rather that robots and humans will work together and inform each other. This is a paradigm shift from the vision that dominated the first wave of interest in AI, in which robots would simply do our bidding. Not only is the threat of job displacement much less apparent in this paradigm, but machines that exist to enhance our ability to do our jobs effectively are a far more attractive use of AI.

This was demonstrated by the self-navigating robots developed by Dr. Veloso’s team. These “CoBots” can sense their environment and navigate around doors and other obstacles to perform a range of tasks. They use email, voice, and text to communicate with the humans who help them complete their missions, share changes to the building layout (e.g., a closed door or work area) amongst themselves, and are internet-enabled so they can parse meaning from natural-language requests. A fascinating junction of these technologies came together when a CoBot was asked to fetch coffee: it parsed the voice command, used the internet to pair the concepts of ‘coffee’ and ‘kitchen’, navigated to the kitchen, requested coffee from a human, and returned to the original requester. Though this generation of CoBots is more limited than the general-purpose butler robot we see in science fiction, their current abilities are both an impressive achievement and a solid foundation for Intelligence Augmentation as an enhancement to human abilities and quality of life.
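
The coffee-fetching flow can be caricatured as a small planner. The keyword lookup and location table below are hypothetical stand-ins for the CoBots’ real natural-language and web components, but they show the shape of the human-in-the-loop plan.

```python
# Toy sketch of the "fetch coffee" flow described above (not actual CoBot
# code): parse a request, look up where the object lives, and emit a plan
# that includes asking a human for help with steps the robot can't do.
OBJECT_LOCATIONS = {"coffee": "kitchen", "printout": "print room"}  # stand-in for a web lookup

def plan_fetch(request: str, start: str = "office 7"):
    # Crude keyword match standing in for real natural-language parsing.
    item = next((obj for obj in OBJECT_LOCATIONS if obj in request.lower()), None)
    if item is None:
        return ["ask requester to rephrase"]
    place = OBJECT_LOCATIONS[item]
    return [
        f"navigate from {start} to {place}",
        f"ask a human in the {place} to hand over the {item}",  # human-in-the-loop step
        f"navigate back to {start} and deliver the {item}",
    ]

for step in plan_fetch("Could you get me a coffee?"):
    print(step)
```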

Day 2:

The second day of the EmTech Digital conference was dedicated to our progress with self-driving vehicles. Chris Urmson, Director of Self-Driving Cars at Google, put on an impressive display of the technology and machine intelligence that go into creating driverless vehicles. The demo consisted of a recording of the sensor inputs and the processing of that data. It showed how the sensors captured everything in the car’s environment: people, signs, lights, cyclists, lane markers, and so on. One demoed scenario included a cyclist suddenly appearing in front of a Google car, and showed how quickly the car responded while still accounting for the other environmental variables. After the demo, his sentiment that “humans are terrible drivers” was echoed for the rest of the day.
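
While the demo’s actual pipeline is far more sophisticated, the reactive core of the cyclist scenario can be sketched as a time-to-collision check: brake when any tracked object ahead would be reached within a safety threshold. The threshold and obstacle format below are assumptions for illustration only.

```python
# Drastically simplified sketch of the kind of decision shown in the demo.
# Real systems fuse lidar/radar/camera data and plan whole trajectories;
# this only illustrates the reactive braking core.
TTC_THRESHOLD_S = 2.0  # assumed safety margin, in seconds

def should_brake(obstacles):
    """obstacles: list of (distance_m, closing_speed_mps) for objects ahead."""
    for distance_m, closing_speed_mps in obstacles:
        if closing_speed_mps <= 0:
            continue  # object is holding distance or pulling away
        if distance_m / closing_speed_mps < TTC_THRESHOLD_S:
            return True  # e.g., a cyclist cutting in close ahead
    return False

# A cyclist appears 8 m ahead, closing at 5 m/s -> TTC of 1.6 s, so brake.
print(should_brake([(8.0, 5.0), (40.0, 2.0)]))  # True
```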

The ByteCubed Perspective:

So what does this mean for ByteCubed, and how is the current state of this technology applicable, or even practical? Machine learning is a concept with endless applications, as long as you have the data to back it up. The federal government is a gold mine when it comes to data: there are petabytes of it, yet much of it is lost in translation or simply not utilized effectively. With artificial intelligence and machine learning, an enormous amount of meaning and context can be given to that data, and the information it can potentially provide is invaluable. Machine learning makes things like predictive analytics possible; it is already used to predict trends in the stock market with reasonable success. This would allow the data we currently have to be used not only to understand the present, but also to analyze future scenarios and catch curveballs before they hit us. The potential applications of these new technologies are endless, and the context they provide could greatly increase the efficiency of government, something that is very much needed.
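
As a minimal sketch of what predictive analytics on such data could look like, the snippet below fits a linear trend to a hypothetical monthly series and extrapolates a quarter ahead. Real deployments would use far richer models, but the workflow (fit the history, forecast forward) is the same.

```python
# Minimal predictive-analytics sketch: fit a trend to a historical series
# and extrapolate it. The data here is synthetic and purely illustrative.
import numpy as np

# Hypothetical monthly figures (e.g., requests processed per month).
months = np.arange(24)
observed = 100 + 3.5 * months + np.random.default_rng(1).normal(0, 5, 24)

slope, intercept = np.polyfit(months, observed, deg=1)  # least-squares line
next_quarter = np.arange(24, 27)
forecast = slope * next_quarter + intercept

print(f"trend: {slope:.1f} per month; next 3 months: {forecast.round(1)}")
```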
