Huge thank you to those who checked out my last video. I was taken aback by the reaction, and it was awesome to interact with so many people from around the world.
Reactions tended to break down into two categories:
- This is so cool
- OMG – my job! Will AI replace programmers?
I want to show some additional GPT3 demos and also explain some of the different types of AI and machine learning: it’s amazing and cool, but you don’t have to worry about your job, at least not YET! We do, however, need to worry about AI having bias and being racist, sexist, and more.
Let’s start with some recent highlights…
These demos show amazing progress, but in order to understand why programmer jobs aren’t in jeopardy, you need to understand the basics of AI. There are four categories to consider:
- Reactive machines – Deep Blue, IBM’s chess-playing computer, is the classic example. These systems look at the current environment and react, with no memory of the past.
- Machines with memory – These are AIs that can remember things over time and learn from their environment. Self-driving cars are an example of this.
- Theory of mind – This is when a machine can think and understand that others can think. It can put itself in the shoes of someone else and serve basic needs and functions in a general way. This is called Artificial General Intelligence.
- Self-aware – This is when a machine has the abilities of the previous categories and can also understand its own existence. This is in the realm of science fiction, and both categories #3 and #4 are theoretical areas of research that we’re not close to yet.
GPT3 is mostly #1. While it has a lot of data, it’s not designed to remember things from session to session. The model is pre-trained, that’s the “P” in GPT (Generative Pre-trained Transformer), and you can think of it as the world’s most sophisticated auto-complete, similar to how Google completes the sentence when you start typing a search. GPT3 is able to complete questions, code, HTML files, and more. Because it’s trained on so much data, the auto-complete has context but not memory. It’s incredibly good, but it’s not perfect, and its output isn’t validated or tested.
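To make that concrete, here’s a minimal sketch of what a GPT3 completion call looked like with OpenAI’s beta Python library. The prompt and parameters are just illustrative, and you’d need your own API key and beta access:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; requires beta access

# Ask the model to "auto-complete" a prompt. Everything after the prompt
# is generated token by token from patterns in the training data.
response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 model
    prompt="Write a SQL query that returns the ten most recent orders:",
    max_tokens=64,     # cap on how much text to generate
    temperature=0.7,   # higher values produce more varied completions
)

print(response.choices[0].text)
```

Notice there’s no session state here: every call starts fresh, which is exactly why GPT3 has context (whatever fits in the prompt) but not memory.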
Most of the time, the output of GPT3 will be a starting point, not the final product. In the examples above, the HTML, SQL, CSS, and text that GPT3 produces is impressive in quality and fidelity, but it’s a draft to build on rather than a final result.
As I said, GPT3 is an amazing piece of technology, and I can understand why people may worry about their jobs. This kind of concern has been around since Aristotle and the ancient Greeks. Farmers have worried about tractors, scribes worried about the printing press, and mathematicians and typists worried about computers. There’s a term for this: technological unemployment.
While technology can eliminate or shift jobs it also tends to create new jobs and new opportunities. Even if GPT3 is really good, the world will still need engineers, designers, poets, and creators, perhaps more than ever.
The problem with AI
I tend to be an optimist but there are areas that still need a lot of work when it comes to AI and in particular, bias tends to be a real problem.
Here Chukwuemeka shows an image recognition system that wasn’t trained with diversity in mind…
This is why diversity in technology is so important, and it’s also why we need to be careful about the data that’s driving and powering the world’s most powerful autocomplete. AI tends to work off of large collections of data: image data, text data, and more. If we’re not careful about the input data and how we test it, the results can be deeply problematic.
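To see how that happens, here’s a toy sketch with entirely synthetic data (not a real system): a classifier trained on a dataset where one group makes up 95% of the examples can look accurate overall while failing badly on the group it rarely saw.

```python
# Hypothetical illustration: under-representing a group in training data
# yields a model that performs much worse for that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate toy samples for one group; `shift` makes the groups differ."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# 95% of the training data comes from group A, only 5% from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluated on balanced test sets, group A looks great while group B is
# barely better than a coin flip.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=3.0)
print("group A accuracy:", model.score(Xa_test, ya_test))
print("group B accuracy:", model.score(Xb_test, yb_test))
```

The scary part is that the aggregate accuracy on a 95/5 mix still looks respectable, which is exactly how this kind of bias hides behind a single headline metric.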
In another study out of MIT, Joy Buolamwini explores the notion of algorithmic bias and how it can be a huge problem.
Joy has a great TEDx talk on this topic if you want to learn more.
As developers start incorporating GPT3 into their products and technologies, it’s important that they consider all sorts of biases that may be in the data.
Bad jokes, offensive ideas, historical inaccuracies, propaganda, sexism, racism, and more: in the billions of tokens that GPT3 has processed, it’s gotten good at auto-completing many things, including some that we may find offensive, inaccurate, or even dangerous.
Sam Altman, one of the founders of OpenAI, touched on this recently in response to Jerome Pesenti, the head of AI at Facebook:
It’s great that OpenAI is taking bias seriously and it’s important that engineers building and incorporating AI into their products consider how their training data may have biases.
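If you’re shipping GPT3 completions to users, one blunt but useful first step (hypothetical code, not OpenAI’s API) is to screen generated text before it’s displayed. A real system would layer a trained moderation model on top of this, but even a simple gate forces you to think about what should never reach a user:

```python
# Hypothetical safeguard: screen model output before showing it to users.
# A keyword denylist is crude and easy to evade; treat it as a first line
# of defense, not a complete solution.
BLOCKED_TERMS = {"example_slur", "example_insult"}  # placeholders, not a real list

def is_safe(completion: str) -> bool:
    """Reject completions containing any blocked term (case-insensitive)."""
    text = completion.lower()
    return not any(term in text for term in BLOCKED_TERMS)

def show_to_user(completion: str) -> str:
    return completion if is_safe(completion) else "[completion withheld]"
```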
Thank you to everyone who watched my last video and checked out this post. I’m incredibly grateful for your feedback and comments. If you’re new to the blog, I tend to write about entrepreneurship, technology, and design, so if you like that sort of thing you can sign up to get updates when I post. You can also subscribe on YouTube if you prefer.