The Internet is filled with sources for learning AI, unrealistic hype about AI, and equally unrealistic pessimism. It may seem impossible to get a balanced, non-partisan understanding from this ocean of content (not to mention the growing presence of AI-generated text, video, and audio).
However, there is one way to get a much richer, more nuanced, and in-depth understanding of AI, and that's by working on it. You might be thinking: that's ridiculous, I can't just pick up AI that easily. And you'd be half-right. You can't learn to build production-ready AI models today, but you can learn the basics very fast. Let's get started.
If you have a few months of spare time, there are more rigorous paths to an in-depth understanding of AI; this article is just meant to help you understand the space better.
First off, you need an environment to build your AI in. We'll be using Google Colab. Think "Google Docs for data science." The type of AI we'll build falls into the category of "machine learning," which means teaching computers to do tasks without explicitly programming them to do so, so let's use that terminology.
These are the basic steps for building our "machine learning model":
1. Acquire data (the model uses data to learn patterns and trends, a process called "training"; the data can be text, images, audio, or other digital information)
2. Train the model (run the data through the model repeatedly, in "iterations," to reduce the error of a function; essentially, we're fitting an equation to previously collected data points)
3. Test the model (check the model on unseen data, to see how well it performs in the real world)
4. Run the model (do cool stuff with your model! In other words, actually use it)
These are the basic, if grossly simplified, steps of any machine learning project.
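The four steps above can be sketched in a few lines of plain Python. This is a toy example with made-up data and an ordinary line fit, not the model we build below:

```python
import numpy as np

# Step 1: acquire data (here, synthetic points along y = 3x + 2 with some noise)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 3 * x + 2 + rng.normal(0, 0.5, size=100)

# Hold back the last 20 points as "unseen" data for testing
x_train, y_train = x[:80], y[:80]
x_test, y_test = x[80:], y[80:]

# Step 2: train (fit a line, i.e. find the slope m and intercept b)
m, b = np.polyfit(x_train, y_train, 1)

# Step 3: test (how far off is the model on data it never saw?)
test_error = np.mean(np.abs(m * x_test + b - y_test))

# Step 4: run (predict y for a brand-new x)
prediction = m * 12 + b
```

With clean synthetic data like this, the fitted slope lands near 3 and the intercept near 2; real data is messier, which is exactly why step 3 matters.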
Alright, let's get into the code. It's fine if you don't understand exactly what it all means; this is just to make the idea of "building AI" more tangible.
First off, we import "dependencies," or tools we need for the project. It's similar to working on an art project with unique fonts: you have to download the fonts from somewhere else and import them.
```python
import pandas as pd                # tables of data
import matplotlib.pyplot as plt    # plotting
import seaborn as sns              # nicer-looking plots
from fbprophet import Prophet      # Facebook's forecasting library
# Show plots inside the notebook
%matplotlib inline
```
Now that we have our tools, let's grab our "data." We're going to predict future Bitcoin prices, so it's probably best to use past Bitcoin prices to "train" the model.
```python
df = pd.read_html('https://coinmarketcap.com/currencies/bitcoin/historical-data/?start=20130428&end=20190316')
df = df[0]                             # read_html returns a list of tables; take the first one
df = df[['Date', 'Close**']].dropna()  # keep only the date and the closing price
df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index('Date')
daily_df = df.resample('D').mean()     # average down to one row per day
d_df = daily_df.reset_index().dropna()
```
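If pulling a live webpage feels opaque, the same cleanup steps work on a tiny hand-made table. The prices below are made up for illustration, not real Bitcoin data:

```python
import pandas as pd

# A small stand-in for the scraped table: newest rows first, like the real page
df = pd.DataFrame({
    'Date': ['Mar 16, 2019', 'Mar 15, 2019', 'Mar 14, 2019'],
    'Close**': [4047.0, 3924.4, 3868.6],
})

df['Date'] = pd.to_datetime(df['Date'])  # parse text dates into real datetimes
df = df.set_index('Date')                # index by date so we can resample
daily_df = df.resample('D').mean()       # one (averaged) row per calendar day
d_df = daily_df.reset_index().dropna()   # back to plain columns, gaps removed
```

A side effect worth noticing: resampling puts the rows into calendar order, oldest first.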
Next, let's predict the future. You probably remember y = mx + b from grade-school math class: fitting a line of best fit to a set of data points. Think y = mx + b, but on steroids.
```python
d_df.columns = ['ds', 'y']                     # Prophet expects a date column 'ds' and a value column 'y'
m = Prophet(daily_seasonality=True)
m.fit(d_df)
future = m.make_future_dataframe(periods=90)   # extend the timeline 90 days past the data
forecast = m.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()  # prediction plus its uncertainty range
```
Finally, let's check what our model predicted. It only ever saw historical data, so we can look back and compare its predictions against what actually happened.
And... that's it. No, you're not on the same level as a mathematics PhD after reading this 5-minute article, but hopefully you can now better contextualize the AI material you read and see.
To recap: AI does not "think" or "learn" on its own; it is built through deliberate effort, and it makes probabilistic predictions about the future. And when I say "it," I mean lines of code like the ones you just read through.
This article was written by Frederik Bussler, CEO of bitgrit. We facilitate crowd-sourced AI with a community of data scientists, and are looking to solve industry AI problem statements for free as we launch our competition platform. Email [email protected] to discuss.