
Increase your creativity 60%

Let’s face it: we’ve all been spending a lot of time sitting in front of Zoom meetings, and it’s wearing us down. There’s an easy fix, and research shows it doesn’t just make us a little more creative; it makes us a lot more creative.

If you’re trapped in an office, I get it. But so many people are now working from home, and yet they’re still doing the same thing over and over again.

Let’s mix it up. Grab your phone and headphones and go for a walking meeting.

Why are Zoom meetings so exhausting?

  • We are tuned to watching body language.
  • We are tuned to eye contact.
  • We are constantly scanning for where to look.
  • In an in-person meeting you can catch glances, share a moment, and have little side-bar conversations.

Online meetings don’t really let you do that.

  • You look for eye contact but can’t find it.
  • You look for micro-expressions, but they are harder to see.
  • On video chat we’re always looking at people’s eyes, and if they aren’t looking at us, we think they’re ignoring us. Humans are just wired that way.
  • Sometimes the audio is slightly out of sync with people’s lips, and it drives our brains crazy.

Don’t get me wrong. I’m a big proponent of turning on the video camera and I think Zoom is a great tool, but everything in moderation; staying in this mode all day is mentally draining.

Walking Meetings

I’ve been doing walking meetings for years, and walking literally changes your perspective. When you’re walking you’re more focused and you tend to listen better. You’re not fidgeting with your phone or refreshing your newsfeed in another tab.

Because you’re not busy trying to read people’s facial expressions, you can focus more on what they’re saying. There are countless physical health benefits to walking meetings, but don’t do this just for your physical health. It’s good for your mental health and creativity too.

I started doing walk & talk meetings about 10 years ago. I’m a big fan of the show The West Wing, where Aaron Sorkin’s characters famously walk and talk their way through a scene while telling a story. The movement advances the story, and it gives both sides a chance to talk.

When you go on a walking meeting with someone, you can’t see their facial expressions, so you’re really concentrating on what they’re saying.

The other reason to go for a walk during your meetings is that it can make you more creative. A lot more creative. According to a study out of Stanford, it can increase your creativity by as much as 60% for tasks where you’re thinking of novel ideas.

They studied the effect of walking on creativity and it doesn’t even have to be walking outside. They got the same results from walking on a treadmill.

So if you’re feeling burned out on Zoom meetings, know that it’s how your brain is wired. Grab your phone, mask, headphones and go for a walk.


Find Product Market Fit Fast

Product Market Fit is one of the hardest things for an early-stage startup to achieve and it’s a critical step for companies looking to scale and be successful.

Product/market fit means being in a good market with a product that can satisfy that market.

Marc Andreessen – Andreessen Horowitz

So the first thing you need to do is to understand your target market. Are you building a product for the automotive market, the food & beverage market, the software or technology market or something else? To find product-market fit, you really need to narrow your market and niche down. Don’t try to make your product solve the problems of multiple markets early on. Identify a core initial target market.

Once you know your target market, make sure you really understand and research it. You can’t possibly expect to satisfy your customers, let alone a whole target market, unless you really understand their problems.

As you research a market, you’ll start to identify its problems. After just a couple of conversations you’ll see patterns emerge in the problems people are experiencing.

In the early stages of a startup it’s typical to build early solutions to those problems, and when a solution solves a specific problem you have product-problem fit: you’ve identified a problem and you’ve provided a solution. Many entrepreneurs think they’re set and stop there, but finding product-market fit is more complex. You need to ensure that your product doesn’t just solve one specific problem; it has to solve a problem that is repeatable and consistent across a large market segment.

To do this you need two core things:

  • First, you need a good cross-section of customers across your market. Not just your circle of friends, but evidence that a good portion of the customers in your target market share a problem you can address.
  • Second, your product has to be sticky enough that people would be upset if you took it away.

Finding and solving a problem is a great start, but to really find product-market fit you need to make sure the problem you’re solving is widespread, impacts a large enough market in a scalable way, and that your solution feels like a need-to-have rather than a nice-to-have.

It’s better to make a few users love you than have a lot that are ambivalent.

Paul Graham – Y Combinator

You need people to care, and the best way to find out if they do is to ask them. Ask your users how they’d feel if they could no longer use your product. The size of the group that answers ‘very disappointed’ is what tells you whether you’ve found product/market fit.

Sean Ellis, who ran growth in the early days of Dropbox, LogMeIn, and Eventbrite, advises that if at least 40% of your users say they would be “very disappointed” without your product, you’ve found product-market fit.
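
As a rough illustration of how you might score that survey, here is a minimal sketch in Python. The responses below are made up; in practice they would come from your real user survey.

```python
# Minimal sketch of scoring the Sean Ellis survey described above.
# The responses are hypothetical sample data.
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
]

very_disappointed = sum(r == "very disappointed" for r in responses)
score = very_disappointed / len(responses) * 100

print(f"'Very disappointed' share: {score:.0f}%")  # 50% for this made-up sample
print("Signal of product-market fit" if score >= 40 else "Keep iterating")
```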

When thinking about product-market fit, it’s also worth considering founder-market fit. Some founders have deep experience with a particular market. Maybe they spent a decade at a large company in the target market, so they know the right people and they know which problems are still unsolved. Good founder-market fit can be a huge advantage, and investors will consider how well a founder is aligned with a market. On the flip side, founder-market fit can sometimes be a roadblock; sometimes only an outsider can see just how broken a market is. If Uber’s founders had deep experience in the taxi market, they might never have built as disruptive a company.

Finding product-market fit is one of the most misunderstood and most difficult steps for any growing startup. Keeping yourself focused on the customer, and on how that customer relates to the larger market, will keep your company on track.


GPT3 – Will AI replace programmers?

Huge thank you to those who checked out my last video, I was taken aback by the reaction and it was awesome to interact with so many people from around the world.

Reactions tended to break down into two categories.

  • This is so cool
  • OMG – my job! Will AI replace programmers?

I want to show some additional GPT3 demos and explain some of the different types of AI and machine learning. It’s amazing and cool, but you don’t have to worry about your job, at least not YET! We do, however, need to worry about AI having bias and being racist, sexist, and more.

Let’s start with some recent highlights…

  • A context-aware dictionary that knows the definition of a word based on its context.
  • An example of image recognition paired with GPT3 to show good and bad ingredients in a product.
  • An example of using GPT3 as a function within a spreadsheet.
  • An example of CSS and layout generation using GPT3.
  • A quote generator based on GPT3: https://thoughts.sushant-kumar.com/
  • An example of UI generation within Figma using GPT3.
  • An example of using GPT3 to write SQL queries (a sketch of what such a call might look like follows below).
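
To make the SQL demo above concrete, here is a rough sketch of what such a call might have looked like with the beta-era OpenAI Python client. The prompt, model choice, and stop sequence are my own illustration, not the demo author’s actual code.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Describe the table and the question in plain English and let the model
# complete the SQL query.
prompt = (
    "Table: orders(id, customer_id, total, created_at)\n"
    "Question: total revenue per customer in 2020, highest first\n"
    "SQL:"
)

response = openai.Completion.create(
    engine="davinci",   # base GPT3 model available in the beta
    prompt=prompt,
    max_tokens=100,
    temperature=0,      # keep the output deterministic-ish for code generation
    stop=[";"],         # stop once the query is complete
)

print(response.choices[0].text.strip() + ";")
```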

These demos show amazing progress, but to understand why programmer jobs aren’t in jeopardy you need to understand the basics of AI. There are four categories to consider:

  • 1. Reactive machines – Deep Blue, the chess-playing computer, is the classic example. These systems look at the environment and react.
  • 2. Machines with memory – AIs that can remember things over time and learn from their environment. Self-driving cars are an example of this.
  • 3. Theory of mind – A machine that can think and understand that others can think. It can put itself in someone else’s shoes and serve basic needs and functions in a general way. This is called Artificial General Intelligence.
  • 4. Self-aware – A machine that has the abilities of the previous categories and also understands its own existence. This is the realm of science fiction; both categories 3 and 4 are theoretical areas of research, and we’re not close to them yet.

GPT3 is mostly #1. While it was trained on a huge amount of data, it’s not designed to remember things from session to session. The model is pre-trained (that’s the “PT” in GPT), and you can think of it as the world’s most sophisticated auto-complete, similar to how Google completes the sentence as you start typing. GPT is able to complete questions, code, HTML files, and more. Because it’s trained on so much data, the auto-complete has context but not memory. It’s incredibly good, but it’s not perfect and its output isn’t validated.
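
One way to picture “context but not memory”: every request has to carry its own context in the prompt. Here is a conceptual sketch of my own; the helper and the conversation text are invented purely for illustration.

```python
# Conceptual sketch (not OpenAI code): GPT3 keeps no state between calls, so a
# "chat" really means re-sending the whole transcript as the prompt every time.
def build_prompt(history, new_message):
    """Rebuild the full context for each request; the model itself remembers nothing."""
    return "\n".join(history + ["Human: " + new_message, "AI:"])

history = ["Human: What's the capital of France?", "AI: Paris."]
print(build_prompt(history, "And what's its population?"))
# The model only "knows" we're talking about Paris because the earlier exchange
# is included in the prompt; leave it out and that context is gone.
```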

Most of the time the output of GPT3 will be a starting point, not the final product. In the examples above, the HTML, SQL, CSS, and text that gets produced is impressive in quality and fidelity, but it is best treated as a starting point rather than a final result.

As I said, GPT3 is an amazing piece of technology and I can understand why people may worry about their jobs. Technology has caused this kind of concern since Aristotle and the Ancient Greeks. Farmers worried about tractors, scribes worried about the printing press, and mathematicians and typists worried about computers. There’s a term for this: technological unemployment.

While technology can eliminate or shift jobs it also tends to create new jobs and new opportunities. Even if GPT3 is really good, the world will still need engineers, designers, poets, and creators, perhaps more than ever.

The problem with AI

I tend to be an optimist but there are areas that still need a lot of work when it comes to AI and in particular, bias tends to be a real problem.

Here Chukwuemeka shows an image recognition system that wasn’t trained with diversity in mind…

This is why diversity in technology is so important, and it’s also why we need to be careful about the data that’s driving and powering the world’s most powerful autocomplete. AI tends to work off of large collections of data, whether images or text. If we’re not careful about the input data and how we test it, problems follow.
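
A toy example of what “being careful about the input data” can look like in practice: before training on a corpus, even a crude count of how different groups are represented can surface gross imbalances. This is my own illustration and the filename is a placeholder; real bias audits are much more involved.

```python
from collections import Counter

# Crude first-pass audit of a training corpus (illustrative only).
# "training_sample.txt" is a placeholder path for whatever text you plan to train on.
words = Counter(
    w.strip(".,!?\"'").lower()
    for w in open("training_sample.txt", encoding="utf-8").read().split()
)

# Compare how often paired demographic terms appear; large skews hint at
# representation gaps the model will happily learn and reproduce.
for a, b in [("he", "she"), ("man", "woman"), ("men", "women")]:
    print(f"{a}: {words[a]:>8}   {b}: {words[b]:>8}")
```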

In another study out of MIT Joy Buolamwini explores the notion of algorithmic bias and how it can be a huge problem.

Joy has a great TedX talk on this topic if you want to learn more.

As developers start incorporating GPT3 into their products and technologies, it’s important that they consider all sorts of biases that may be in the data.

Bad jokes, offensive ideas, historical inaccuracies, propaganda, sexism, racism, and more. Across the billions of tokens GPT3 has processed, it has gotten good at auto-completing many things, including some that we may find offensive, inaccurate, or even dangerous.

Sam Altman, one of the founders of OpenAI, touched on this recently in response to Jerome Pesenti, the head of AI at Facebook:

It’s great that OpenAI is taking bias seriously and it’s important that engineers building and incorporating AI into their products consider how their training data may have biases.

Thank you to everyone who watched my last video and checked out this post. I’m incredibly grateful to you for your feedback and comments. If you’re new to the blog, I tend to talk about entrepreneurship, technology, design, so if you like that sort of thing you can sign-up to get updates when I post. You can also subscribe on YouTube if you prefer.


GPT 3 Demo and Explanation

Last week GPT3 was released by OpenAI and it’s nothing short of groundbreaking. It’s the largest leap in artificial intelligence we’ve seen in a long time, and the implications of these advances will be felt for years to come.

GPT 3 can write poetry, translate text, chat convincingly, and answer abstract questions. It’s being used to code, design, and much more.

I’m going to give you the basics and background of GPT3 and show you some of the amazing creations that have started to circle the Internet in just the first week of the technology being available to a limited set of developers.

Let’s start with a few examples of what’s possible.

Demonstration of GPT-3 designing user interface components:

The designer is able to describe the interface that they want and the GPT3 plug-in to Figma is able to generate the UI.

GPT3 creating a simple React application:

Here the developer describes the React application they want, and the AI writes a function component with the hooks and event handlers needed to work correctly.

More examples from the first week:

  • A couple of examples of GPT3 generating paragraphs of text from short cues about what’s needed.
  • GPT3 completing an Excel table of data.
  • A web plug-in that finds answers within a Wikipedia article.
  • GPT3 used as an answer engine for arbitrary questions.

Background

GPT3 comes from a company called OpenAI. OpenAI was founded by Elon Musk and Sam Altman (former president of Y Combinator, the startup accelerator), with over a billion dollars invested, to collaborate on and create human-level AI for the benefit of the human race.

OpenAI has been developing its technology for a number of years. One of its early papers was on generative pre-training. The idea behind generative pre-training is that while most AIs are trained on labeled data, there’s a ton of text out there that isn’t labeled. If you use that unlabeled text itself as the training signal, having the model predict future words and tuning it on its mistakes, and you repeat the process, the predictions start to converge.
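
To make that idea concrete, here is a toy sketch of learning from unlabeled text by predicting the next word. It’s a simple word-pair counter of my own, nowhere near GPT’s transformer architecture, but it shows how raw text alone provides the training signal.

```python
from collections import Counter, defaultdict

# Toy next-word predictor trained only on raw, unlabeled text (illustrative corpus).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Pre-training": count which word follows which in the unlabeled text.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> "on", learned with no labels at all
```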

The original GPT stands for Generative Pre-Training, and it used about 7,000 books as the basis of its training. The new GPT3 is trained on a lot more: 410 billion tokens from crawling the Internet, 67 billion tokens from books, 3 billion from Wikipedia, and more. In total the model has 175 billion parameters and was trained on 570GB of filtered text (drawn from over 45 terabytes of unfiltered text).

Over an exaflop-day of compute was needed to train on the full data set.

The amount of computing power used to pre-train the model is astounding: more than an exaflop-day. An exaflop is a billion billion (10^18) calculations per second; if you performed one calculation per second, it would take you over 31 billion years to match what an exaflop machine does in a single second.
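
The back-of-the-envelope arithmetic behind that comparison:

```python
# Rough arithmetic only: one exaflop machine performs 1e18 calculations per second.
ops_per_exaflop_second = 1e18
seconds_per_year = 60 * 60 * 24 * 365

years_at_one_calc_per_second = ops_per_exaflop_second / seconds_per_year
print(f"{years_at_one_calc_per_second:,.0f} years")  # roughly 31.7 billion years
```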

The GPT3 technology is currently in a limited beta, and early-access developers are just starting to produce demonstrations of it. As the beta expands you can expect to see a lot more interesting and deep applications. I believe it’ll shape the future of the Internet and how we use software and technology.

Links: