I was reminded earlier this week about how long it takes to build a company, to build a venture firm, and for technology to reach escape velocity. The headlines are AI, AI, AI but after a recent meeting with an Institutional LP, I took a walk down memory lane as he reminded me of the deck we sent him in 2017!
Here is one of the slides from our fundraising deck highlighting Intelligent Automation as a core theme. If you click through the post below, you will also find a few more enduring themes which at the time were super exciting but in retrospect still early!
Here’s another slide from that deck in 2017:
Given how dialed in we at boldstart were around the idea of intelligent automation, we worked with several CIOs on boldstart automation day outlining what we thought was the future. Here’s a slide deck from August 2018 you should check out.
As you can see, we’re almost there delivering the vision from 6 years ago with agentic workflows! Which is why when we met João from CrewAI, we were blown away by what he was building. Super cool to now see CrewAI as a course offered by Andrew Ng!
Here’s Microsoft at Build this week launching AI flows, or AI-powered workflows.
One of the more interesting announcements at Build, however, is about how the company is bringing generative AI to Power Automate, its process and workflow automation platform. And while process automation may not exactly be a topic that sets your world on fire, this may be one of the areas where generative AI can create real value.
Starting soon, for example, you will be able to take any repetitive desktop workflow and not just record it with Power Automate Desktop but also narrate what you are doing to Power Automate’s AI Recorder. Through this, the service can then combine voice and a screen capture to create more resilient workflow automation. One nifty aspect of this is that those automations are less likely to break when there are small changes to the user interface, too.
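To make the resilience point concrete, here’s a minimal sketch in Python - purely hypothetical and not Power Automate’s actual implementation - of why pairing a narrated step with the text visible in a screen capture beats replaying fixed click coordinates: the step is re-resolved against whatever the UI shows at run time, so small layout changes don’t break it. The element names and the OCR’d input are assumptions for illustration only.

```python
# Hypothetical sketch: replay a recorded step by matching the narrated label
# against whatever text is on screen today, instead of clicking a fixed spot.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class ScreenElement:
    label: str  # text recovered from the screen capture (e.g., via OCR)
    x: int
    y: int


def find_target(narrated_label: str, on_screen: list[ScreenElement]) -> ScreenElement:
    # Pick the element whose visible text best matches the narrated step, so the
    # automation survives small UI changes that move or restyle the button.
    return max(
        on_screen,
        key=lambda e: SequenceMatcher(None, narrated_label.lower(), e.label.lower()).ratio(),
    )


if __name__ == "__main__":
    screen_today = [
        ScreenElement("Submit order", 640, 410),  # has moved since the recording
        ScreenElement("Cancel", 520, 410),
    ]
    target = find_target("submit", screen_today)
    print(f"Replaying 'click the submit button' -> click at ({target.x}, {target.y})")
```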
Seeing and building the future takes time!
Sticking with the idea that we overestimate the effect of technology in the short run and underestimate the impact in the long run, I noticed Gartner released its latest AI Hype Cycle, so I compared it to the last one from 2019 (excerpt from my LinkedIn post)
Enterprise tech rhymes and repeats and also takes a long time...
The investments made in the prior ML/AI wave of 2017-2019 have set the foundation for today
Take a look at the Gartner Hype Cycle for AI, then and now, 2019 and 2023
AI cloud services and data labeling both moved along the curve to the Plateau of Productivity between 2019 and 2023, and AGI, which was super early in the Innovation Trigger in 2019, has moved along the curve nicely in 2023
Also about to peak in 2019, "Intelligent Applications":
Intelligent applications are enterprise applications with embedded or integrated AI technologies to support or replace human-based activities via intelligent automation, data-driven insights, and guided recommendations to improve productivity and decision making.
That sounds a lot like what Satya announced in October 2022 - "it's going to be part of every product" - it never got there back in 2019 because the AI wasn't good enough, and now it's being integrated into every single app from every vendor at a pace I've never seen before.
Satya quote here.
Gartner 2019 definition of Intelligent Apps here.
Look at the Peak of Inflated Expectations 2019 - Chatbots
I'd argue that those are so back...but of course, that UI will evolve with what Gpt4o is, the Microsoft AI PC, desktop apps for Mac, etc
At the same time, looking at both of these hype cycle charts, the transformer architecture obliterated a few areas and changed the curve, as Generative AI was nowhere to be found in 2019 and sits at the Peak of Inflated Expectations in 2023
What will the Hype Cycle for AI look like in 2028 or 2029? Will we still be working off the same curve, or will there be another massive leap beyond transformers that significantly alters it?
I have a sneaking suspicion that the current transformer architecture will be replaced by something else 6-7 years from now, so I imagine that the curve above will be completely changed again. Given that, what should someone build in the new age of AI? Kevin Scott, CTO of Microsoft, has some suggestions (🎩 Dan Shipper)
What you should work on in the AI era
Scott believes that you should find things to build that used to be impossible that are now merely hard. He contrasts this with things that used to be hard and are now easy, which he thinks will not end up being durable businesses.
In a comparison to the mobile era, fart apps (easy to build) are no longer around, but Uber (hard to build) is. He thinks the same thing will be true in the AI era.
Find what used to be impossible, and build it.
How to figure out what's merely hard - and not impossible
The trickiest question in technology is how to figure out if something's hard but doable, instead of impossible. Scott suggested finding things "right at the ragged edge" by playing with new models as soon as they are released, trying to figure out what their new capabilities are.
He talked about GitHub Copilot - Microsoft's programming assistant - as an example. When it was first prototyped, its suggestions were only accepted about 30 percent of the time. Many people internally believed that was far too low and wrote off the project. In his view, though, that 30 percent was already surprisingly good.
"If you can see any glimmer of hope, whatsoever," he said, "those are the places to look. Not all of those will lead to success, but big ideas will start there."
As always, 🙏🏼 for reading and please share with your friends and colleagues. For those in the US, have a wonderful Memorial Day weekend!
#Yes, seed is broken, it’s confusing - thanks to Mattias Ljungman for expanding on my original framework on Inception Investing and the 3 types of Inception rounds: Discovery, Classic, and Jumbo. So what’s after Inception? Seed Expansion - read Mattias’ post for more, and if you agree chime in here 🧵
The 2 stages of Seed
Ed Sim at Boldstart Ventures recently coined “Inception Rounds” to redefine what first money in really means in this new world, describing rounds that occur before incorporation and range from $2m to larger than $6m.
It’s a valuable framework, but there’s room to expand it, thinking about the specific stage and needs of the company, not just the amount and timing of the funding.
The temptation is to reclassify it all – but I didn’t want to create something too detailed, too fragile for an industry whose milestones change with each cycle. It would quickly become irrelevant.
I wanted a simple, useful heuristic, so I propose categorising Seed into two distinct groups: Seed Inception, which Ed Sim describes well as the initial ticket going in, and Seed Expansion, which describes rounds where founders have already achieved some milestones and metrics.
#A great reminder to enjoy the journey and not just the destination (Ryan Holiday)
Focus on effort, not outcomes.
It’s insane to tie your wellbeing to things outside of your control, Marcus Aurelius said. If you did your best, if you gave it your all, if you acted with your best judgment—that's a win…regardless of whether it’s a good or bad outcome.
#Founders and CEOs can be described as chief cook and bottle washer, and this includes Jensen Huang, founder and CEO of Nvidia
Nvidia CEO: “you cannot show me a task that is beneath me.”
The enemy of continuous progress and growth is arrogance, a zero-sum mindset, and a sense of entitlement.
Video here!
#While the RSA security conference is over, I always enjoy reading the Cybersecurity Perspectives report from my friend Ariel Tseitlin and Scale Ventures, which is based on a survey of CISOs, CIOs, and security practitioners…
For our 2024 report, our research goal was to understand how security leaders are responding to technology shifts like the migration to the cloud and AI, especially within the context of increasing resource constraints.
*Identity-based attacks remain high. Nearly 50% of firms lost credentials to phishing and third-party attacks (i.e. an outside vendor, supplier or partner in an organization’s supply chain), as cybercriminals used legitimate identity privileges to spread ransomware, exfiltrate data, and extort victims.
*The variance in the types of attacks security teams face is increasing over time. 76% of companies reported three or more different security incident types. Over two years, firms with four incident types increased 341% YoY, up from 7% to 29%.
*The skills shortage persists. For the second year in a row, cloud infrastructure security is reported to be the hardest role to fill on security teams.
*Leaders are finding ways to invest in innovative solutions. 89% of security leaders indicated that AI is important to improving their security in 2025. And, despite slowing budget growth, firms allocated 29% more budget for new, innovative, and experimental security solutions this year.
*Security teams are struggling to protect AI models while trying to fully realize AI’s true potential. Security incidents caused AI model drift at 11% of surveyed enterprises last year, up 304% YOY.
#As identity-based attacks remain high, check out the 4th Annual State of Passwordless Identity Assurance report from Hypr, a portfolio company
This year's report goes beyond our traditional authentication focus and covers everything from identity-related attacks and breach costs, all the way to IT helpdesk, Gen AI encouragement & concerns, as well as current mindsets around passkeys/passwordless. This report marks an important evolution as our platform also continues to cover a wider breadth of identity lifecycle challenges.
A few of these stats from the report include:
* 78% of organizations have been targeted by identity-based cyberattacks
* 91% of breached organizations name credential misuse or authentication weakness as a root cause
* 3 in 5 name AI-powered threats as their top identity security concern
* 89% believe passwordless MFA provides the highest level of security and 41% intend to adopt it over the next 1-3 years
* Identity proofing/verification was named one of their top security challenges (37%)
#Problems with Retrieval Augmented Generation (RAG) from Aaron Levie
One hard problem with AI right now is retrieval augmented generation (RAG) with wide-ranging heterogeneous information. A common architecture pattern in AI right now is that you connect up a large amount of data to an AI model, and when a user or machine sends in a query, you find the best matches from the underlying data set and then send that information to the AI model to answer the user's prompt. This is a very efficient way to be able to have an AI access information that is frequently changing (like web data) or potentially wouldn't be appropriate to have in an underlying training set of the AI model (like private corporate data).
This is a relatively fundamental and breakthrough architecture in AI, but there's a small catch. The AI's answer is only as good as the underlying information that you serve it in the prompt. And because the user isn't the one giving it the data, but instead a computer, you're at the mercy of how good that computer is at finding the right information to give the AI to answer the question. Which means, of course, you're also at the mercy of how good, accurate, up-to-date, and authoritative your underlying information is that you're feeding the AI model in the prompt...
More here:
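For anyone who hasn’t wired one of these up, here’s a minimal sketch of the pattern Aaron describes, using only Python’s standard library. The toy bag-of-words retriever, the sample corpus, and the prompt format are hypothetical stand-ins for a real embedding index and your own data; the resulting prompt would then be sent to whichever model you use.

```python
# Minimal RAG-style sketch: retrieve the best-matching documents for a query,
# then assemble them into the prompt that would be sent to an LLM.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term count. Real systems use dense vectors.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # The step the whole answer hinges on: rank the corpus against the query
    # and keep only the top-k matches to feed the model.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def build_prompt(query: str, context: list[str]) -> str:
    # Stuff the retrieved context into the prompt alongside the user's question.
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"


if __name__ == "__main__":
    corpus = [
        "Q1 revenue grew 12% driven by enterprise seats.",
        "The 2019 handbook describes the old expense policy.",
        "Security incidents must be reported within 24 hours.",
    ]
    question = "How fast did revenue grow last quarter?"
    prompt = build_prompt(question, retrieve("revenue growth last quarter", corpus))
    print(prompt)  # This prompt would then be passed to an LLM of your choice.
```

Swap the toy retriever for a real vector index and the answer quality still hinges on the same two things Aaron calls out: how well the retrieval step ranks documents, and how accurate and up-to-date the underlying data is.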
#14 Lessons learned as Karina Nguyen moves from Anthropic to OpenAI - #10 is spot on, show the future as well as tell the story
Life update: After ~2 years at Anthropic, I joined OpenAI! This wasn’t the easiest decision and I’m very grateful to everyone who is supporting me through this transition, especially John, Barret, Boris, Mira, and Sam.
I joined Anthropic as the first designer / front-end engineer when there were ~60 people and left as a researcher when there were >500. I learned so much and hope that every project I’ve worked on, be it a UI, a paper, or a trained Claude model, will still carry all the love and care I put in. Some lessons learned:
1. The pace of a team’s progress is largely a function of its decisiveness and open-mindedness to take risky paths.
2. Every time you train a new model there will be an inevitable brain damage that needs to be solved and often you can reverse engineer the issue by carefully looking at the data.
3. The simplest and dumbest approach will often just work.
4. You have to go through the entire journey of full understanding to arrive at the simplest answer.
5. When technology is so transformative, it’s your job to tell customers what they need to do with it to solve their problems.
10. Being the first design-oriented person is challenging and often you will end up teaching people how to think rather than doing things. But you can learn a great deal about fundraising and shape the public perception by continually making beautiful demos and communicating research clearly.
Rest here:
#Microsoft recording your screen for AI - some wild use cases…watch video 🤯
Microsoft just previewed Copilot’s new feature powered by GPT-4o.
By enabling screen sharing, the AI can watch and understand what you’re doing on your computer and answer questions in real-time.
It's essentially the new ChatGPT Mac App, but for Windows.
#Still early for the enterprise - here are some comments from JPM’s Investor Day this past week (PYMNTS). If interested, the Investor Day slides are here, and if you dig into the firm overview, you’ll also see that JPM is increasing its IT budget by another $1.5B, from $15.5B to $17B, in 2024. That is an insane 🤯 number, and despite how large an organization it is, JPM has always done an amazing job working with innovative startups who see the future.
The banking giant is having all new hires undergo artificial intelligence (AI) training, Mary Erdoes, who runs the firm’s asset- and wealth-management unit, said Monday (May 20).
“This year, everyone coming in here will have prompt engineering training to get them ready for the AI of the future,” said Erdoes, whose comments at JPMorgan’s investor day were reported by Bloomberg News.
She said AI is helping her unit in two ways: improving revenue growth and saving time. There’s less focus on “hunting and pecking,” she said, as bankers can now pull up certain information on potential investments while clients are on the phone.
In addition, AI is doing away with “no joy work” by getting rid of rote tasks and saving some analysts two to four hours of their workdays, Erdoes said.
JPMorgan President Daniel Pinto told the audience that the bank sees AI as worth between $1 billion and $1.5 billion, and that the technology will have a “very, very” large impact for the firm’s 60,000 developers and 80,000 operations and call-center employees — close to half the company.
#H (previously Holistic AI) in France raised a $220 million Inception Round to work on agentic workflows, the next big thing
It’s not often that you hear about a seed round above $10 million. H, a startup based in Paris and previously known as Holistic AI, has announced a $220 million seed round just a few months after the company’s inception.
It has managed to raise so much money so quickly because it’s an AI startup working on new models with an impressive founding team. Charles Kantor, the startup’s co-founder and CEO, was a university researcher at Stanford.
The four other co-founders all previously worked for DeepMind, the AI company owned by Google: Karl Tuyls was a research director at DeepMind...
As you may have guessed, H is going to work on AI agents: automated systems that can perform tasks that are traditionally performed by human workers. The company’s minimalistic site states that H is working on “frontier action models to boost the productivity of workers.”
#Data labeling startup Scale AI raises $1B and doubles valuation to $14B on $700M revenue in 2023 (FT)
It was launched in 2016 to label the images used to develop autonomous driving systems. Since then it has grown rapidly by providing enormous volumes of accurately labelled data to train tools such as OpenAI’s ChatGPT.
The company, which was valued at $7.3bn in 2021, employs a vast network of employees, many of them contractors, to do the labelling. Scale’s revenues were roughly $700mn last year, according to a person with direct knowledge of the matter.
Its focus on what Alexandr Wang, co-founder and chief executive, describes as “one of the least sexy problems in AI” has given Scale a central position in the booming sector.
AI models have improved dramatically over the past 18 months, but further leaps — such as the capability to reason, interpret text, images and speech simultaneously, or complete multi-step tasks — rely on larger, more complex data sets.
#13 of the last 14 cybersecurity acquisitions came from Israel? (Tomer Diari of Aleph, referencing the Bessemer Cybersecurity report)
Crazy stat - 13 out of 14 cybersecurity acquisitions in the last 6 quarters are Israel-based.
A consolidation motion is taking place in the cybersecurity industry. The shift from point solutions to platform has pushed the largest security vendors in the world to buy up emerging technologies and expand their product suite. In fact, it’s still going on, with the upcoming acquisition of Noname Security by Akamai Technologies.
My former colleagues Amit Karp, Michael Droesch and Yael Schiff at Bessemer Venture Partners just released a report on the global state of cybersecurity (link in the first comment), but it’s this one graph in particular that caught my attention.
More here...
#To that point, CyberArk out of Israel is buying Venafi for $1.54B to capture the machine-to-machine identity space (TechCrunch)
The startup is described as a specialist in PKI and certificate management, and CyberArk says that the deal will expand its own total addressable market by $10 billion (to a total of $60 billion).
“This acquisition marks a pivotal milestone for CyberArk, enabling us to further our vision to secure every identity – human and machine – with the right level of privilege controls,” said Matt Cohen, CEO of CyberArk, in a statement. “By combining forces with Venafi, we are expanding our abilities to secure machine identities in a cloud-first, GenAI, post-quantum world. Our integrated technologies, capabilities and expertise will address the needs of global enterprises and empower Chief Information Security Officers to defend against increasingly sophisticated attacks that leverage human and machine identities as part of the attack chain.”
#Time to value and time to the aha moment matter - this from a recent study from GitHub and Accenture on Copilot - 60 seconds to first accepted suggestion!
In fact, 96% of those who installed the IDE extension started receiving and accepting suggestions on the same day. On average, developers took just one minute from seeing their first suggestion to accepting one, too. This was further validated in user surveys, with 43% finding GitHub Copilot “extremely easy to use” and 51% rating it as “extremely useful.”
#Just a reminder, this AI thing is going to be big - remember Field of Dreams, if you build it, they will come? Well, folks are building and investing at enormous scale - $1 trillion to be invested in AI infrastructure over 10 years, according to Daniel Ives at Wedbush. BTW, imagine back in the day, this money would have had to be invested by each startup that built its own cloud at co-lo data centers! Insane how far we have come!
So far this year, Microsoft and Amazon have earmarked more than $40 billion combined for investments in AI-related and data center projects worldwide.
Broadly, big tech companies are looking to “spread their wings” to international markets, Wedbush analyst Daniel Ives told The Wall Street Journal. “This is an AI arms race as Microsoft, Amazon and others skate to where the puck is going with this tidal wave of spending on the doorstep.”
DA Davidson analyst Gil Luria expects these companies to spend more than $100 billion this year on AI infrastructure. Spending will continue to increase in line with demand, Luria said.
Ives expects significant continued investment in AI infrastructure by tech companies over the next 10 years: “This is a $1 trillion spending jump ball over the next decade.”
#Who remembers Palm Computing? “In 2000, Palm was worth more than Apple, Nvidia & Amazon combined.” (Jon Erlichman)