I’ve been angel investing for the last few years, and I love working with founders and entrepreneurs, but some of that changed when I recently accepted a role running the Techstars Boston Accelerator program.
So why? Why join a company when I could just angel invest and work with entrepreneurs on my own? Two key reasons, and they apply to founders too.
You can do more with a team. Yes, you can go it alone, and there are plenty of successful solo founders, but I’ve found that you can accomplish a lot more as a team than you can on your own. For me, the Techstars team will allow me to accelerate and invest in significantly more founders and companies than I could on my own.
You can do more with a community. As an individual I’ve established many amazing relationships in the Boston community, but the Techstars brand and reputation are so much larger than any one person. I’m inheriting the power of amazing mentors, unicorn companies, and past and present founders and partners who are cheering for us. It’s exciting and humbling. A community is so much stronger than anything I could do on my own.
Ok, ok… Team and Community. Sounds nice… but WHY?
I asked myself this question, why do I like entrepreneurship, founders and investing? The answer for me was that I wanted to leave the world better than how I found it. Entrepreneurs and startups are best positioned to create meaningful change. If I can accelerate companies that do one of the following, I think I’ll be helping to make a long-term difference.
Save Lives – Companies that are doing the difficult work of developing technologies, medicines and other innovations that directly save lives or reduce suffering.
Save the Planet – We all share this marble called earth and there are many things about it that we know need fixing. This covers sustainability and clean-tech across a number of technology sectors.
Change How We Live and Work – Companies that profoundly change the way we live and work. This is the future of disruptive technologies: both the ones already in progress, like crypto, robotics, remote work and AI, and future technologies that will change humanity for the better.
The team at Basecamp, makers of Hey, recently published an announcement about their internal company operations. You can read it here.
The blog post itself clearly articulates both a cultural and societal point of view, yet it also declares that such discussions don’t make sense within the company.
First and foremost, Basecamp gets to choose its culture and how it covers such topics. That said, they are using their platform to signal a significant shift and are choosing to do so in public. It would seem that they are inviting debate.
No more societal or political discussions – The point of a company is to create a profit and generally to provide some type of benefit, and those benefits tend to accrue to society. Not being able to discuss society seems problematic. Politics and product decisions tend to go hand-in-hand: privacy, security, and censorship are both product questions and political discussions. On January 6th, Amazon, Twitter, Apple, Google, Facebook and others made technical product decisions that had massive political implications. The politics of a company speak to its culture and values, and they help both employees and customers decide whether they want to support a particular company. Staying silent actually speaks volumes.
I’m not suggesting that internal company discussions should focus on politics. It’s reasonable to encourage such discussions to move to other platforms but asking for zero political or societal discussion seems broken too.
No more paternalistic benefits – Benefits are there to help employees. When done well, benefits allow employees to concentrate on work: medical, dental, child care, 401(k), fitness benefits, and so on. The whole point of these benefits is to provide mutual value to the employee and the company. Done well, they can have huge impacts on finances, health, family, mental wellbeing and productivity that individuals are unlikely to achieve on their own.
Focusing on financial compensation alone implies that money is not just an important thing, it’s the ONLY thing that matters.
No more committees – Accountability within a company is great. The problem is that individuals can impose norms on an entire company. Committees can provide a voice to people who are otherwise unheard or marginalized, and there are many topics that benefit from diverse voices. Having one person steer the ship makes sense, but having many people helping plot the course and looking out for icebergs is even better.
No more lingering or dwelling on past decisions – Lingering on past decisions is not productive, but it’s also a signal that something else could be wrong. This tends to happen when you don’t have team consensus or commitment to a decision. If you have a consensus culture rather than a top-down one, unpopular decisions will end up being toxic to the organization, and top-down organizations will tend to repel employees who seek to be heard. The “my way or the highway” attitude can work for some things, but it can close you off from hearing about problems.
tldr; Basecamp is a unique company. I don’t always agree with them, but they spark good discussions. This is one where I think they missed the mark in terms of culture, values, and vision. I suspect this was prompted by something internal, but the “fix” seems misaligned with the root cause.
Companies have politics, form committees, and dwell on decisions. Even if you say they don’t, they still will. Solving the root issues around accountability, trust, and product vision is the only thing that will actually help the company spend more time building better products.
GPT3 provides a simple and accessible API for using AI in an application. The application I wanted to create was a command line utility to quickly look up commands and their associated switches.
The language processing of GPT3 is well suited to answering natural language questions about a multitude of obscure commands that would otherwise be difficult to memorize.
The development process starts in the playground on the OpenAI website. The examples section offers a number of starting points; my command line bot was a modified version of a translation example.
You provide a number of examples and can then modify the options within GPT3 to increase or reduce randomness and provide alternatives for start/stop sequences. The beauty of GPT3 is that you don’t need to provide a lot of examples for the software to get really good at knowing the types of responses that you want.
With 5-6 samples the application was reliably producing useful results. The OpenAI playground makes it easy to export the API call as either a CURL command or as a short block of Python code.
The resulting Python code uses a library called openai and is otherwise six lines of code, two of which are actually unused (yes, I reported this bug).
I had never previously written much Python code, and the Google Colab tool was a great way to get started. It’s essentially an interactive Python editing and debugging environment, and since it’s interactive and in the cloud there isn’t much setup needed.
I copied the exported code and started playing around. If you have a GPT3 API key you can try my initial version here.
In playing interactively, I was able to more easily figure out how to parse the JSON, and I realized that including the platform name (Mac, Windows, Linux) in the prompt helped the bot determine the appropriate platform-specific commands, options and folders. This really improved the quality of the results.
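To make the two ideas above concrete, here is a minimal sketch of a prompt builder that includes the platform name, plus the JSON parsing of a completion-style response. The function names, the few-shot examples, and the prompt format are my own illustrations, not the project’s actual code; the response shape (`choices[0]["text"]`) matches the completion API’s JSON.

```python
import platform

# Hypothetical few-shot examples in a Q/A format (assumption, not the
# original prompt used by the bot).
FEW_SHOT = (
    "Q: How do I list all files? (Linux)\n"
    "A: ls -la\n"
    "Q: How do I show free disk space? (Linux)\n"
    "A: df -h\n"
)

def build_prompt(question: str) -> str:
    # Including the OS name (Mac, Windows, Linux) steers the model toward
    # platform-specific commands, switches and folder conventions.
    os_name = {"Darwin": "Mac"}.get(platform.system(), platform.system())
    return f"{FEW_SHOT}Q: {question} ({os_name})\nA:"

def parse_answer(response: dict) -> str:
    # The completion API returns JSON with a "choices" list; the generated
    # text lives in the first choice.
    return response["choices"][0]["text"].strip()

# A response dict shaped like the API's JSON, for illustration:
fake_response = {"choices": [{"text": " wc -l filename.txt\n"}]}
print(parse_answer(fake_response))  # wc -l filename.txt
```

Ending the prompt with `A:` (and using a newline as the stop sequence) is what keeps the model answering one question at a time instead of continuing the Q/A pattern on its own.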
OpenAI API and limitations
The current version of the API doesn’t allow fine-tuning of the GPT3 completion model on your own data set. For use-cases such as customer service or email completion, the ability to further train GPT3 on specific large data sets would be particularly useful. I do think this is coming, as fine-tuning was part of GPT2, but it hasn’t yet made its way into GPT3.
The other thing to note about the API is that it does take some practice and exploration. When providing a set of examples, it was difficult to unit-test the examples against known and desirable results. Very subtle changes in the prompts would yield very different results and the software would occasionally get stuck in a loop, feeding off of its own content.
Building the bot
Once I had the basic code working in Google Colab, I was able to get a version running on my computer. Since I had never programmed in Python, I ended up actually using GPT3 to help “auto-complete” some of my functions, pasting blocks of code back and forth with the playground. It wasn’t perfect, but it felt much more natural and collaborative than the alternative of jumping between StackOverflow pages.
The core bot used the basic prompt from the playground example and a SQLite database to keep track of requests/responses and act as a local cache. This is likely overkill at this point but I thought it could be interesting if a general database of questions and answers could be compiled, filtered, enhanced and sorted over time. The database also acts as a speed and cost buffer since the GPT3 API is not free and not always the fastest.
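The cache described above can be sketched with Python’s built-in sqlite3 module. The schema and function names here are assumptions for illustration, not the project’s actual code: look up a question before calling the API, and store the answer afterward.

```python
import sqlite3

# Minimal request/response cache sketch (schema is an assumption).
def open_cache(path: str = "cbot_cache.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS cache ("
        " question TEXT PRIMARY KEY,"
        " answer TEXT)"
    )
    return conn

def cached_answer(conn: sqlite3.Connection, question: str):
    # Return the stored answer, or None on a cache miss.
    row = conn.execute(
        "SELECT answer FROM cache WHERE question = ?", (question,)
    ).fetchone()
    return row[0] if row else None

def store_answer(conn: sqlite3.Connection, question: str, answer: str) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO cache VALUES (?, ?)", (question, answer)
    )
    conn.commit()

conn = open_cache(":memory:")  # in-memory DB for the example
if cached_answer(conn, "count lines in a file") is None:
    # On a miss, this is where the GPT3 API call would happen.
    store_answer(conn, "count lines in a file", "wc -l filename.txt")
print(cached_answer(conn, "count lines in a file"))  # wc -l filename.txt
```

Keying the table on the raw question text is the simplest possible approach; it also means the stored question/answer pairs can be exported later for filtering and sorting, as the post suggests.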
$>cbot "How do I count the number of lines in a file?"
wc -l filename.txt
$>cbot "How do I get the mime type of a file?"
$>cbot "How do I create a file with the text 'hello world'"
echo hello world > hello.txt
$>cbot "How do I open php in interactive mode?"
$>cbot "How do I set my email using git config?"
git config --global user.email "[email protected]"
$>cbot "What is the current date"
After using the bot for a few weeks I started to add some more advanced functionality.
The -x option allows you to execute the command directly. The -c option copies the answer to the clipboard (a little safer than just executing it).
$>cbot -x "how do I put my computer to sleep"
The -g option allows you to ask general questions.
$>cbot -g "Who was the 23rd president?"
Lastly, you can use cbot to save command shortcuts. So if you’ve remembered an obscure command you can save it for later.
While this code is open source, the problem is that the OpenAI API isn’t openly available: API keys are only issued to select developers. That makes it impossible, for now, to publish open-source software meant to be used by typical end-users.
I am hopeful that end-user API keys will be made available so that open-source AI software and tools can be made more broadly available.
I took a 32″ eInk display and turned it into a digital newspaper that updates every day. It’s silent, wireless and can run for months without being plugged in.
The display is based on the Visionect 32″ Place & Play display. The eInk display acts as a thin client with very little processing power; the panel draws no power while holding a static image, and the rest of the hardware simply listens on an open port, drawing very little power.
The display is 99% more power efficient than a traditional LCD, so it can run for months without being charged. Because the display is a thin client, it requires two external components to work. The first is an HTML rendering server: it fetches web pages, renders them as a headless browser, and pushes images to the display. The second is an application server that fetches newspapers from around the country, downloads the PDFs and turns them into images and HTML that can be processed.
The HTML rendering server runs from a Docker container provided by Visionect. A standalone server would be ideal, but I couldn’t find documentation on the client/server protocol. This may be a future exploration.
Unless you are technical, I wouldn’t recommend running out and buying one. The server is set up via a Docker container, and I was able to get it running on my home Synology NAS backup server. The second part is the piece I wrote to fetch newspaper files from online sources like freedomforum.org, a non-profit that works on First Amendment and freedom-of-the-press issues.
The newspaper portion runs as a simple web application, fetching large-format PDF files and resizing them to fit the large-format display. You can find code for the project on Github. The result is a display that is both engaging and passive. There are no buttons, no UI, nothing to touch or fiddle with. The newspapers cycle every 10 minutes, so in the morning there’s always a fresh front page to skim. The beauty of eInk is that it’s 99% more efficient than traditional displays like LCD, which means the display can run for months, and when it needs charging I can simply top it up.
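The core of the resizing step is just fit-to-display scaling: shrink each newspaper page to the largest size that fits the panel while preserving its aspect ratio. Here is a small sketch of that computation; the 1440×2560 portrait resolution is an assumption for illustration, not a confirmed spec of the Visionect display, and the function name is my own.

```python
# Assumed portrait resolution of the eInk panel (illustrative only).
DISPLAY_W, DISPLAY_H = 1440, 2560

def fit_dimensions(page_w: int, page_h: int) -> tuple:
    # Scale the page so it fits entirely within the display while
    # preserving its aspect ratio (the smaller of the two ratios wins).
    scale = min(DISPLAY_W / page_w, DISPLAY_H / page_h)
    return round(page_w * scale), round(page_h * scale)

# A US broadsheet front page rendered at roughly a 12:22 aspect ratio:
print(fit_dimensions(1200, 2200))  # (1396, 2560)
```

The resulting width/height pair is what you would hand to whatever image library rasterizes the PDF page, so tall broadsheet pages fill the panel’s height and leave narrow margins on the sides.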
Why did I build this? I saw something similar online a few years ago, and since I couldn’t find it for sale I decided to build it myself. I worked on a newspaper when I was in college, and my editor-in-chief, Mike Bossi, would always tell people to read multiple newspapers. He said that the truth is never in one paper or one story; every writer has bias. By getting multiple perspectives you get a better picture of the truth.
This digital display is a little reminder of that. While most websites are powered by content management systems and templates, traditional newspapers are still designed by hand.
The design principles that work well in newspapers can also be applied to the design of our digital products: balance, prominence, margins, columns and more. A great newspaper design helps readers skim, read, digest and understand, and when you can understand without any UI at all, that’s something special.