Good morning. This is a big week for Big Tech earnings. Microsoft reports second-quarter earnings on Tuesday, Meta reports on Wednesday and Apple and Amazon report on Thursday.

Bank of America estimated that, led by this array of Magnificent Seven giants, companies accounting for more than a third of aggregate S&P 500 earnings will report this week. It will be a big week to watch as investors begin to look for returns against a backdrop of high and growing investment in artificial intelligence.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:
AI for Good: Machine learning for water quality

Source: Unsplash
In a world with steadily deteriorating global water quality, machine learning offers the first step toward a solution: it is very good at predicting things. For a long time, though, the challenge in applying it to water quality prediction was insufficient data.

Last year, however, scientists developed a machine learning model designed to predict river water quality.

The details: To overcome the data insufficiency, the model was trained on data gathered by remote sensors.

Part of the study’s purpose was to model the implications of climate change for water quality. The researchers said the method developed here enables a thorough empirical analysis that addresses previously unfilled gaps in water quality research.
Why it matters: The researchers said that, given the link between climate extremes and water quality, “understanding of climate change implications for water quality is critical for the future of water quality management.”
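The study’s actual model and features are not described here, but the basic idea of predicting a water-quality measure from remote-sensor readings can be sketched with a toy example. Everything below is invented for illustration: a single “turbidity” sensor feature and a simple least-squares fit.

```python
# Illustrative sketch only: a toy least-squares predictor for a
# water-quality index from one remote-sensor feature (turbidity).
# The feature, data and model are assumptions, not the study's method.
import random

random.seed(0)

# Synthetic "sensor" data: quality index degrades as turbidity rises.
turbidity = [random.uniform(0, 10) for _ in range(100)]
quality = [80 - 3 * t + random.gauss(0, 2) for t in turbidity]

def fit_line(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return my - slope * mx, slope

b0, b1 = fit_line(turbidity, quality)
predicted = b0 + b1 * 5.0  # predicted quality index at turbidity 5.0
```

A real system would use many sensor features and a far richer model, but the workflow — sensor readings in, predicted quality out — is the same.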
Protect yourself from data breaches

We live in an online world with little privacy and plenty of threats. Surfshark is the best way to make sure you're protected.

Surfshark provides real-time credit and ID alerts, plus instant email data breach notifications and regular personal data security reports.

Protect yourself online with Surfshark. Get 80% off when you sign up today.
Google DeepMind’s major math breakthrough

Source: Google
Last week, Google DeepMind unveiled AlphaProof and AlphaGeometry 2, new AI models designed to overcome the problem-solving shortcomings of large language models and act as more usable tools in the domain of mathematics.

Together, the two models solved four of the six problems from this year’s International Math Olympiad (IMO), the first time an AI system has performed at the level of a silver medalist in the competition.

The details: Human competitors are given two 4.5-hour sessions in which to submit answers. Google’s models solved some problems very quickly (within minutes, according to a statement), while others took much longer (three days, in one instance).

AlphaProof works by training itself to prove or disprove mathematical statements in the formal language of Lean. It interfaces with a large language model that translates natural-language problems into Lean, and it was trained for the IMO by proving or disproving millions of problems. Mathematician Timothy Gowers said the main caveat to the breakthrough is the time the model took to solve the problems, which suggests that AlphaProof hasn’t “solved mathematics.”
“However, what it does is way beyond what a pure brute-force search would be capable of, so there is clearly something interesting going on when it operates,” he said. “We'll all have to watch this space.”

At the same time, the announcement has been criticized by researchers for providing “no technical details whatsoever but just bragging about non-verifiable results.”
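DeepMind has not published AlphaProof’s code, but Lean itself is open. As an illustration only — this is a trivially simple claim, nothing like a competition problem — here is what a formally stated and proved statement looks like in Lean 4:

```
-- Illustrative only: a simple statement and its proof in Lean 4,
-- the formal language in which AlphaProof proves or disproves problems.
-- Here the proof is a direct appeal to a core library lemma.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

The appeal of working in Lean is that a proof either type-checks or it doesn’t, so a system like AlphaProof can verify its own training signal mechanically.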
Earnings playbook: Your guide to the busiest week of the season, including Microsoft and Apple (CNBC).
Paris Olympics broadcasters diverge on AI approach (Reuters).
Pressured to relocate, Microsoft’s AI engineers in China must choose between homeland and career (Rest of World).
Elon Musk forced to step in to resolve Tesla Cybertruck owner’s hellish experience (Fortune).
AlphaProof, AlphaGeometry, ChatGPT, and why the future of AI is neurosymbolic (Marcus on AI).
Study: Synthetic data isn’t the breakthrough it’s cracked up to be

Source: Created with AI by The Deep View
Data is a central ingredient in the construction of AI models. Because of this, developers have spent the past few years scraping the entire corpus of the internet to train progressively larger models.

Amid academic warnings that we will soon run out of data to train these models, developers have begun turning to synthetic data for training purposes; that is, data generated by an LLM, used to train an LLM. But a new study published in Nature found that this is not a tenable approach.

The details: The study found that “indiscriminate use of model-generated content in training causes irreversible defects in the resulting models.”

The researchers dubbed this scenario “model collapse”: over time, models “forget the true underlying data distribution.” The researchers said that this process is “inevitable.”
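The mechanism can be illustrated with a toy simulation — a drastic simplification of the paper’s setting, not its actual experiment: “train” a model by fitting a Gaussian to data, sample fresh “synthetic” data from that fit, refit, and repeat. The estimated spread of the distribution steadily collapses, a cartoon of how tails of the true distribution get forgotten.

```python
# Toy illustration of model collapse (not the Nature study's experiment):
# each generation is trained only on samples from the previous generation.
import random
import statistics

random.seed(1)

def fit_gaussian(samples):
    """'Train' a toy model: estimate mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

# Generation 0: real data drawn from N(0, 1).
data = [random.gauss(0, 1) for _ in range(20)]
mu, sigma = fit_gaussian(data)
sigma_initial = sigma

# Each subsequent generation sees only the previous model's output.
for _ in range(300):
    data = [random.gauss(mu, sigma) for _ in range(20)]
    mu, sigma = fit_gaussian(data)

# The estimated spread shrinks toward zero: the tails are forgotten.
print(sigma_initial, sigma)
```

Small-sample estimation error compounds generation after generation, which is the intuition behind the paper’s claim that indiscriminate recursive training degrades models.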
AI researcher and cognitive scientist Gary Marcus said in response that the “only way we will move significantly forward is to develop new architectures — likely neurosymbolic — that are less tied to the idiosyncrasies of specific training set.”

Invest smarter with Public and AI

If you're looking to leverage AI in your investment strategy, you need to check out Public.

The all-in-one investing platform allows you to build a portfolio of stocks, options, bonds, crypto and more, all while incorporating the latest AI technology — for high-powered performance analysis — to help you achieve your investment goals.

Join Public, and build your primary portfolio with AI-powered insights and analysis.
Last week this morning: Apple’s agreement with the White House and everything else we missed

Source: Unsplash
It has been a busy week.

So busy, in fact, that we weren’t able to cover everything important that happened over the past week. So, this morning, we’re going to take a brief look at, well, everything we missed.

Apple’s voluntary agreement with the White House

What happened: The White House on Friday published a statement detailing the progress federal agencies have made in the nine months since President Joe Biden issued his executive order on artificial intelligence.

In this statement, the White House said that Apple “has signed onto the voluntary commitments” laid out in the order, “further cementing these commitments as cornerstones of responsible AI innovation.” These principles largely aim to ensure that AI models are safe, secure and non-discriminatory; they call for new levels of model transparency, requiring developers to share the results of safety tests with governments and academia.
The FTC weighs in on open-weights models

Just a few weeks before Meta announced the release of its latest generative AI model, the U.S. Federal Trade Commission (FTC) tackled the issue of open-versus-closed AI in a blog post.

The details: The FTC said that, like open-source software, open-weights models have the “potential to be a positive force for innovation and competition.”

The agency noted, however, that “open” models do pose some risks. One centers on granting bad actors access to powerful systems. Mark Zuckerberg acknowledged these risks in an open-source manifesto, in which he argued that it would not be difficult for geopolitical adversaries to steal AI models, closed or open, so we might as well make them open.

The FTC added that different — and subject-to-change — definitions of “open” could hinder competition down the line. For instance, companies might start open to gain market share, only to close off later in order to “gain dominance and lock out rivals.”
The UK has an AI action plan

Jumping across the sea to our friends in London, the U.K. on Friday announced the establishment of a new AI action plan, which will be led by tech entrepreneur Matt Clifford.

The details: Its purpose is to “identify how AI can drive economic growth and deliver better outcomes for people across the country.”

Science Secretary Peter Kyle said: “We’re putting AI at the heart of the government’s agenda to boost growth and improve our public services.”

Andrew Strait, the associate director of the Ada Lovelace Institute, said in response: “I worry it risks repeating the silver bullet thinking of the last few years that puts 'adoption of AI' as the end goal without understanding if AI is fit for purpose, works, creates better outcomes, or precludes better non-AI solutions.”
US & EU joint statement on generative AI competition

In our last bit of government-related news, the FTC, the U.S. Department of Justice, the U.K. Competition and Markets Authority and the European Commission recently published a joint statement on the state of competition in the world of generative AI.

The details: The statement recognized a number of risks to competition in the sector, chiefly that the key “ingredients” necessary for AI — chips, compute, data and talent — “potentially put a small number of companies in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools.”

The organizations said that the best way to combat this is to enforce fair dealing, interoperability and choice. They warned that “AI can turbocharge deceptive and unfair practices that harm consumers,” adding that they “will also be vigilant of any consumer protection threats that may derive from the use and application of AI.”
Other stuff to know: Twitter very quietly default-enrolled its users into a new setting that allows the company to train its AI model, Grok, on user posts.

And last up: JPMorgan, according to an internal memo seen by the FT, has been rolling out an internal large language model chatbot that it is billing as a “research assistant” for employees. Around 50,000 employees now have access to the model.

This comes in the face of rather glaring LLM weaknesses: their propensity for inaccurate output and hallucination makes them a questionable choice for a “research assistant.”

Ok, that was a lot.

Here’s hoping this week is a little calmer. But somehow, I doubt it.

Happy Monday.

Which image is real?
A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Your view on AI productivity at work:

More than half of you said that using AI at work has made you way more productive. Only around a third of you said that using AI at work has made you less productive.

It’s made me way more productive:

“I use AI every day in my work. From document creation, social media posts and advertising to design, I am using and learning about AI daily. I am more productive than ever before. AI is also a teacher in whatever subject I need to learn. Honestly, I can no longer see things going back to before AI.”
Do you trust Apple to develop AI responsibly now that it has agreed to the White House's voluntary commitments?

*Public disclosure: All investing involves the risk of loss, including loss of principal. Brokerage services for US-listed, registered securities, options and bonds in a self-directed account are offered by Public Investing, Inc., member FINRA & SIPC. Cryptocurrency trading services are offered by Bakkt Crypto Solutions, LLC (NMLS ID 1828849), which is licensed to engage in virtual currency business activity by the NYSDFS. Cryptocurrency is highly speculative, involves a high degree of risk, and has the potential for loss of the entire amount of an investment. Cryptocurrency holdings are not protected by the FDIC or SIPC.

Alpha is an experiment brought to you by Public Holdings, Inc. (“Public”). Alpha is an AI research tool powered by GPT-4, a generative large language model. Alpha is experimental technology and may give inaccurate or inappropriate responses. Output from Alpha should not be construed as investment research or recommendations, and should not serve as the basis for any investment decision. All Alpha output is provided “as is.” Public makes no representations or warranties with respect to the accuracy, completeness, quality, timeliness, or any other characteristic of such output. Your use of Alpha output is at your sole risk. Please independently evaluate and verify the accuracy of any such output for your own use case.