Good morning and happy Friday. The Information reported that OpenAI execs are thinking about significantly increasing the subscription price of ChatGPT — apparently, prices ranging up to $2,000 per month were on the table (though it’s unlikely to be that high).

This is interesting in the context of the AI boom or bubble … at some point, developers need to generate a real return on spend, and what that effort might look like has been up for debate.

A shift to an expensive subscription might well price a lot of folks out.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:
AI for Good: NASA is predicting water quality in the Chesapeake Bay

A satellite image of the Chesapeake Bay (NASA).
In 2022, NASA teamed up with the National Oceanic and Atmospheric Administration (NOAA) and the Maryland Department of the Environment (MDE) to develop machine learning software that would better allow scientists to predict water quality.

They decided to begin with the Chesapeake Bay, which contributes about $9 million annually to the state’s economy.

The science: Though the project is still in its early phases, the scientists have so far demonstrated that their software can train machine learning algorithms to identify “precursors to poor water quality in satellite images” taken over the Bay.

The idea is that satellites, combined with ML algorithms, will allow local resource managers to find and rectify instances of pollution before they can harm people or the environment. The next area they hope to explore involves predicting pollutants that are optically invisible, which means training algorithms to identify other patterns, such as areas of low oxygen. They have already seen promising early results in this arena.
Why it matters: As with medical applications, the name of the game is early awareness, which allows for early intervention. The sooner we know what’s going on (and where), the better we can prevent things from getting worse.
Don’t pay for sh*tty AI courses when you can learn it for FREE!

This incredible 3-hour Workshop on AI & ChatGPT (worth $399) makes you a master of 25+ AI tools, hacks & prompting techniques to save 16 hours/week and do more with your time.

Sign up now (free for first 100 people) 🎁

This workshop will teach you how to:

- Do AI-driven data analysis to make quick business decisions
- Make stunning PPTs & write content for emails, socials & more in minutes
- Build AI assistants & custom bots in minutes
- Solve complex problems, research 10x faster & make your life simpler & easier
You’ll wish you knew about this FREE AI masterclass sooner 😉

Register & save your seat now! (valid for next 24 hours only!)
The early days of Australia’s regulatory approach

Source: Unsplash
Eager to appease the regulatory desires of its citizens, while also providing a clearer framework for businesses keen to deploy AI, the Australian government on Thursday published a list of voluntary AI safety standards.

The voluntary standard is intended to establish consistent practices for companies, while also laying the groundwork for future legislation (which would come in the form of mandatory guardrails). It serves the additional purpose of helping businesses clearly navigate AI deployments within existing Australian law.

The framework: The 10 guardrails call for the implementation of internal accountability processes, risk identification and mitigation processes, and data governance measures to assure data quality and provenance.

They also call for deployers to test, evaluate and monitor their AI models, enable clear human control and oversight levers, and clearly “disclose when you use AI, its role and when you are generating content using AI.” The remainder of the guardrails call for corporate transparency, record-keeping for third-party audits and the establishment of “processes for people impacted by AI systems to challenge use or outcomes.”
The context: At the federal level, this puts Australia somewhat on par with the U.S., which has no federal AI policy aside from President Joe Biden’s (unenforceable) executive order on AI. The European Union, meanwhile, has its AI Act, which is already beginning to come into force.

It’s not clear when these guardrails — or some form of them — will become mandatory.
If you have an online business, you can’t miss this event!

If you’re a business owner, developer, marketer, or designer, Prepathon is the key to unlocking (and dominating) the busiest sales season of the year.

The details: Running from September 10-12, you’ll get access to a series of expert webinars, networking activities and valuable panel discussions. And it’s completely FREE.

Here are the main, can’t-miss themes for the event:

- Website Optimization
- Revenue & Growth
- AI Innovation
You’ll get personalized advice, actionable input (and prizes).

Register Now to secure your spot so you can crush sales season!
AI-powered coding assistant Codeium raised $150 million in Series C funding at a $1.25 billion valuation.

Want to become an AI consultant? Deep View readers get early access to The AI Consultancy Project. Request early access.*
- Microsoft customers pause on Office AI assistant due to budgets, bugs (The Information).
- The underground world of black-market AI chatbots is thriving (Fast Company).
- Google searches are becoming a bigger target of cybercriminals with the rise of ‘malvertising’ (CNBC).
- OpenAI hits more than 1 million paid business users (Reuters).
- Bill Gates on AI (The Verge).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
C3.AI stock tanks despite earnings beat

Source: C3.AI
Enterprise AI firm C3.AI reported earnings this week that were pretty much in line with analyst expectations. But the stock tanked anyway, starting off Thursday morning down roughly 20%.

It recovered slightly throughout the day, closing Thursday evening down 8%.

The performance: The company reported a loss of five cents per share for the quarter (compared to analyst expectations of a loss of 13 cents) and revenue of $87.2 million, slightly ahead of expectations.

The firm issued an outlook for the current quarter — of revenue between $88.6 million and $93.6 million — that remains in line with analyst expectations. But C3’s subscription revenue of $73.5 million came in well below analyst expectations of $79.5 million.
Wedbush analyst Dan Ives called the results a “slight bump in the road,” though he remains positive on the stock due to its “strong pipeline across industries … as the AI Revolution gains more momentum.” He did, however, cut his price target to $30 from $40.

At the same time, C3.AI is not reporting in a vacuum; shares of AI giant Nvidia have been struggling since it reported (great, but not good enough) earnings … investor patience regarding AI seems to be wearing thinner by the day.
US, Europe sign first legally binding international treaty on AI

Source: Council of Europe
The world’s first global, legally binding treaty on artificial intelligence has been signed by a number of international parties, including the U.S., the U.K., the European Union and Israel.

The treaty — the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law — was adopted in May after years of negotiations among 57 countries. It is designed to “promote AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law.”

“We must ensure that the rise of AI upholds our standards, rather than undermining them,” Council of Europe Secretary General Marija Pejčinović Burić said in a statement.

Let’s look at the framework: First, the scope. The agreement says that parties will apply the convention to public authorities and private actors, adding that it does NOT apply to national security applications of AI, or to the research and development of AI tech that hasn’t been deployed.

The general obligations laid out by the convention are simple: parties will adopt or maintain measures that protect human rights and democratic processes. More specifically, it calls for “measures” that would “respect human dignity and autonomy,” and ensure adequate transparency, oversight, responsibility, accountability, equality and non-discrimination. It also calls for the protection of personal, private data and the promotion of safe, reliable systems.
Part of the framework requires information gathering, maintenance and availability, in addition to safeguards. It calls for states to carry out “risk and impact assessments” and to establish appropriate mitigation measures based on those assessments. It also establishes the “possibility” for states to ban certain applications of AI as they see fit.

The document is 12 pages long. You can read it here.

Any good? Legal expert Francesca Fanucci told Reuters that the convention had been “watered down” into a very broad set of principles.

"The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability," she said. One of the caveats she took issue with involves the national security exemptions.
Once countries sign it, they need to ratify it; after ratification, the convention will enter into force three months later. It also arrives in a fraught, complex regulatory environment, with U.S. states offering the early days of a varied patchwork of regulation (one that has met with plenty of resistance), even as federal efforts remain sluggish to the point of nonexistence and global efforts remain more than uneven.

Reading through the document, I’m with Fanucci — for an international treaty, it is remarkably broad, laying out very generic principles, avoiding specifics and establishing a framework for each individual country to basically mitigate risks however it sees fit.

It raises the question of why bother with an international treaty if the end result is ‘do whatever you want.’

Further, aside from a quick, generic mention of the ‘environment,’ it makes no mention of the environmental impact of widespread, horizontally integrated (and entirely opaque) AI models, whose specifics remain shrouded from public view.

I suppose something is better than nothing. But this ‘something’ doesn’t seem particularly wonderful.

Which image is real?

🤔 Your thought process:

Selected Image 1 (Left):

“Both were convincing, from shadows to light reflections, etc. The biggest clue for me was not only did the cabin look a bit off, it was so forward of the picture making it easy to see and the other was so far off it was harder to tell much about it. So, I conclude that the cabin was done so for it to be looked at harder, up close to conclude it was real.”
Selected Image 2 (Right):
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on taking a flight in a self-flying aircraft:

45% of you said you wouldn’t get on board; 25% of you said you can’t wait to try it out.

Can’t wait:

Nope, Nope, Nope:

“Autopilot is one thing, I would still like a pilot there in case there is a mechanical failure, which does happen, or an emergency which requires human intervention, just for supervision I guess. Machines run intensive care patients but you still need nurses to monitor them and intervene if necessary.”
How much would you pay for ChatGPT?
|
|
|