On The Case Of Prediction

A few years ago, when I first learned about deep learning, I was very excited about neural networks and how they mimic the brain – or that's what I thought; you lied to me, hype news! – which took me straight to YouTube to learn about that newly formed, biologically inspired magic of maths.

What was really inspiring to me was not the tasks they can do, but rather the abstract notion of prediction. The ability of a computer program to think and express its opinions. The ability of a GPU to determine whether I'm more likely to click on this advertiser's ad or that one.

This was fascinating to me, and that's what got me into the field; years later, it's still fascinating! I almost lost my mind watching the match between AlphaStar and MaNa. I'm a big StarCraft II fan and I love watching matches between pros. But this time it was not a pro; it was an enormous neural network running on an ensemble of GPUs! How fascinating indeed. To be honest, the setup is not entirely fair and will probably require more constraints. Still, watching that network beat MaNa was scary: it was determined, and it planned to a scary extent.

But do you think one day a robot will be able to lead armies and troops invading another country? Will it be the case that we train them and play them against each other just so we can one day use them against each other?

Are we heading towards building a sentient, generally intelligent entity, or are we just good at curve fitting?

For example, look at these results on ImageNet and how far our models have come. They may even be able to compete with humans! Maybe they're not that good with language yet, but they're certainly good with vision.

What I'm still trying to figure out is whether so-called AI is really intelligent, or whether it's just doing behavior cloning. Does it understand what it's doing, or is it trying to do something it doesn't understand, as best it can, by copying the behavior of its master?
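Behavior cloning, for the record, is exactly what it sounds like: supervised learning on (state, action) pairs recorded from an expert, with no model of why the expert acted. Here is a minimal sketch in Python; the states, actions, and the nearest-neighbour "model" are all invented for illustration, not any real system's API:

```python
# Behavior cloning in its simplest form: memorize expert (state, action)
# pairs and, for a new state, copy the action of the nearest seen state.
# There is no notion of goals or consequences, just imitation.

def fit_clone(demonstrations):
    """demonstrations: list of (state, action), where state is a tuple of numbers."""
    return list(demonstrations)  # the "model" is just the stored examples

def predict(clone, state):
    """Copy the action of the closest expert state (1-nearest neighbour)."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(s, state))
    nearest_state, nearest_action = min(clone, key=lambda sa: dist(sa[0]))
    return nearest_action

# Hypothetical driving demos: state = (speed, distance_to_obstacle)
demos = [((10, 50), "accelerate"), ((30, 5), "brake"), ((20, 20), "steer")]
clone = fit_clone(demos)
print(predict(clone, (28, 6)))  # prints "brake": closest to the braking demo
```

The clone reproduces the master's behavior near situations it has seen, but nothing in it "understands" braking; that is the distinction the question above is pointing at.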

The comparison is like that between an engineering student and a tradesperson, say a lumberjack. The two are different in terms of their mindset and how they learn their craft.

The mindset of the engineering student is as follows:

  • Find a problem, for example: electronic engineering
  • Understand the fundamentals and the underlying theorems
  • Solve smaller problems and understand how they progressively turn into bigger problems
  • Perform inference on new problems you’ve never seen based on your understanding and mental model

The lumberjack has a totally different process.

  • They learn about the tools and how to use them
  • They learn about different kinds of trees and how to deal with them
  • And now they’re ready to start chopping trees down

Now, when you train a deep ResNeXt network on ImageNet, with that gigantic number of parameters and all the resources spent on training, is it in the end trained the way an engineer is trained, or the way a lumberjack is?

Not that I’m trying to demean technical professions; they definitely require intelligence. But they rely mainly on muscle rather than brain. So is ResNeXt engineer-like or lumberjack-like?

Let’s analyze the network:

  • Find a problem, for example electronic engineering: we do that for it
  • Understand the fundamentals and the underlying theorems: it jumps directly to learning from examples and ignores prior knowledge
  • Solve smaller problems and understand how they progressively turn into bigger problems: during training it tries to solve both hard and easy problems at the same time, unless you follow a technique like curriculum learning
  • Perform inference on new problems you’ve never seen based on your understanding and mental model: this is the one box it ticks, since it can classify images it has never seen

Looks like our network scores only 1/4 on the engineer test. Now let’s assess its technical abilities.

  • They learn about the tools and how to use them
  • They learn about different kinds of trees – images, in our case – and how to deal with them
  • And now they’re ready to start chopping down trees, or in our case, predicting ImageNet

Looks like it scored 3/3 in the lumberjack test. So, our beloved network is more of a technical learner than an in-depth scientific learner!
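Curriculum learning, mentioned in the engineer checklist above, is the one place where the engineer-style progression is actively borrowed: instead of feeding the network easy and hard examples in random order, you schedule them from easy to hard. A toy sketch in Python; the example names and difficulty scores are made up (in practice difficulty might come from loss under a pretrained model, image clutter, and so on):

```python
# Curriculum learning in miniature: sort training examples by a difficulty
# score and yield batches easy-to-hard, instead of in random order.

examples = [
    {"id": "blurry_dog", "difficulty": 0.9},      # hypothetical scores
    {"id": "centered_cat", "difficulty": 0.1},
    {"id": "occluded_bird", "difficulty": 0.7},
    {"id": "plain_car", "difficulty": 0.3},
]

def curriculum_batches(examples, batch_size):
    """Yield batches of example ids, ordered from easiest to hardest."""
    ordered = sorted(examples, key=lambda ex: ex["difficulty"])
    for i in range(0, len(ordered), batch_size):
        yield [ex["id"] for ex in ordered[i:i + batch_size]]

for batch in curriculum_batches(examples, 2):
    print(batch)
# first batch holds the easy examples: ['centered_cat', 'plain_car']
```

Even with this trick, the ordering is imposed from outside; the network doesn't decide for itself which problems are the small ones.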

Doesn’t that strip the “intelligent” trait from AI? If AI is not general or sentient, we can live with that. But if it’s not even intelligent, then how come it’s still called “Artificial Intelligence”?

If we set aside the hype articles we read every day about how smart AI is and how it’s going to take our jobs by storm, and focus instead on the other side of AI – how to actually put the “I” in “AI” – then we might start building something that is actually smart.

I’d like to leave you for a few seconds with this amazing video showing the state of the art in self-driving cars and autonomous robots; this is supposed to be the cutting-edge technology we had achieved by 2019!

AI is so smart it has started fighting itself over domination.

Another thing I wanted to discuss is why prediction still struggles so much despite the existence of huge amounts of data. Why don’t we have algorithmic trading machines that make their owners millions of dollars on autopilot?

Why can’t the most advanced machine learning algorithm predict the stock market correctly even for the next few hours? Why isn’t this easy for these models, even though they have historical data going back to the beginning of the stock market?

This is simply because human decisions are not always well justified and well informed. People don’t always do the right thing. They don’t weigh causes the same way. They’re highly susceptible to surrounding noise, be it the weather that day, a child crying in the background, depression, a tight deadline, being drunk, and many more factors that make human decisions more or less unpredictable.
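One way to feel this in code: if price moves were driven mostly by noise, then even a predictor that has seen the entire history can't beat a coin flip on the next move. A toy simulation in Python (a pure random walk, which is an assumption for illustration, not a claim about how real markets behave):

```python
import random

random.seed(42)  # reproducible toy run

# Simulate 10,000 "price moves" as pure noise: +1 or -1 with equal odds.
moves = [random.choice((-1, 1)) for _ in range(10_000)]

# A "trained" predictor with access to the full history: always guess the
# majority direction seen so far (as good as anything if moves are i.i.d.).
hits = 0
ups = 0
for t, move in enumerate(moves):
    guess = 1 if ups * 2 >= t else -1  # majority of past moves
    hits += guess == move
    ups += move == 1

accuracy = hits / len(moves)
print(f"accuracy: {accuracy:.3f}")  # hovers around 0.5
```

Real markets are not pure noise, of course, but the noisy component of human decision-making puts the same kind of ceiling on how much any history-based model can extract.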

Watch this video while asking yourself: “Can I predict the movement of each pedestrian correctly?”

Can AI predict the movement of each and every human in this video? It can’t, because of the number of factors that control this extremely complex process. Every pedestrian’s mind is juggling many thoughts at the same moment: Where am I? Where am I heading? At what speed should I be moving? Should I go faster? Did my phone just vibrate? Oh, look at that billboard! Is that Paula from the literature class? Am I going to make my deadline? And so on.

From a philosophical point of view, AI still has a lot to do. Scientists have many paths left to explore, and there’s still a long journey ahead before we have something we can remotely call intelligent. Until that moment comes, we need to recognize the true potential of our current systems and study their shortcomings in order to improve upon them and produce something we can call intelligent.

Published in Data Science
