| | Good morning. Drones are still flying over the East Coast, and no one really knows what they are. | I haven’t seen any (have you?). The government continues assuring people that there is no evidence the drones pose a threat to public safety, but we still don’t know what exactly they are, or who’s operating them. | The FBI suggested most of the sightings are a case of “mistaken identity,” with people misidentifying small, legal and manned aircraft as drones of mysterious origins. | — Ian Krietzberg, Editor-in-Chief, The Deep View | In today’s newsletter: | 🌎 AI for Good: NASA’s expanded geospatial model 💻 Microsoft’s new model: Small, but mighty 👁️🗨️ Meta rolls out a different kind of AI model 🏛️ UK considers upending copyright law in favor of AI companies
|
| |
| AI for Good: NASA’s expanded geospatial model | | Source: NASA |
| In August of 2023, NASA, in collaboration with IBM, released the Prithvi Geospatial foundation model, an openly accessible AI model trained on an enormous quantity of geographical data and designed to enable targeted Earth observation. | What happened: This month, NASA released an updated version of the model that expands its capabilities and its potential applications. | Several of the new applications involve the AI model reconstructing gaps in satellite imagery caused by cloud cover, enabling a broader view of the Earth’s surface over time. This could in turn aid environmental monitoring, agricultural planning and resource management at scale.
| Why it matters: “We’ve embedded NASA’s scientific expertise directly into these foundation models, enabling them to quickly translate petabytes of data into actionable insights,” Kevin Murphy, NASA’s chief science data officer, said. “It’s like having a powerful assistant that leverages NASA’s knowledge to help make faster, more informed decisions, leading to economic and societal benefits.” |
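For the technically curious: gap-filling of this kind is usually framed as masked-image modeling, where the model sees the clear patches of a satellite tile and learns to predict the occluded ones from context. Here’s a minimal, illustrative PyTorch sketch of that idea; the tiny network below is a stand-in of my own, not NASA and IBM’s actual architecture, and the tile data is random.

```python
# Toy sketch of masked-patch reconstruction, the idea behind cloud-gap
# imputation in geospatial foundation models. Hypothetical, not Prithvi itself.
import torch
import torch.nn as nn

PATCH = 16  # patch edge length, in pixels

class TinyMAE(nn.Module):
    """Toy masked autoencoder: encodes visible patches, predicts all patches."""
    def __init__(self, patch_dim: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decode = nn.Linear(dim, patch_dim)

    def forward(self, patches, cloud_mask):
        x = self.embed(patches)
        x = x * (~cloud_mask).unsqueeze(-1)  # zero out cloud-covered patches
        return self.decode(self.encoder(x))  # reconstruct every patch

# One 6-band, 64x64 tile split into sixteen 16x16 patches, then flattened.
tile = torch.randn(1, 16, PATCH * PATCH * 6)
cloud_mask = torch.zeros(1, 16, dtype=torch.bool)
cloud_mask[0, [2, 5, 11]] = True             # pretend these patches are clouded

model = TinyMAE(patch_dim=PATCH * PATCH * 6)
reconstructed = model(tile, cloud_mask)      # clouded patches filled from context
print(reconstructed.shape)                   # torch.Size([1, 16, 1536])
```

A production model like Prithvi applies the same principle at vastly larger scale, over temporal stacks of real multispectral imagery rather than a single random tile.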
| |
| | Tired of Battling Spam Calls on Your Phone? Here's How to Make Them Disappear. | | Every day, your personal data, including your phone number, is sold to the highest bidder by data brokers. This leads to annoying robocalls from random companies and, worse, makes you vulnerable to scammers. | Meet Incogni: your solution against robocalls. It actively removes your personal data from the web, fighting data brokers and protecting your privacy. Unlike other services, Incogni targets all data brokers, including those elusive People Search Sites. | Put an end to those never-ending robocalls and email spam on your iPhone now. | Incogni protects you from identity theft, spam calls, increased health insurance rates, and more. | Only for The Deep View readers: Get a 58% discount on Incogni's annual plan using code: DEEPVIEW |
| |
| Microsoft’s new model: Small, but mighty | | Source: Microsoft |
| Even as many developers remain convinced that ever-larger Large Language Models (LLMs) will keep delivering exponential gains, some have been paying increasing attention to Small Language Models (SLMs). | Functionally, there’s no difference between the two other than size; where an LLM might have trillions of parameters (GPT-4 is rumored to have 1.8 trillion), an SLM might have only billions. The idea is to optimize the training and post-training process to produce a model that is cheaper to run and more energy efficient, yet generally as capable as an LLM. | What happened: Microsoft last week released Phi-4, the latest in its series of small Phi models. Microsoft markets the Phi family as the “best quality for the cost,” adding that it’s a “smaller, less compute-intensive model for generative AI solutions.” | The details: Phi-4, a 14-billion-parameter model, either matches or outperforms similar and larger models on certain benchmarks, according to Microsoft. The research presenting Phi-4 has not been peer-reviewed, and so has not been independently verified. | But according to Microsoft, Phi-4 is particularly good at math problems, outperforming models such as GPT-4o and Gemini 1.5 Pro. However, Microsoft wrote in a paper that Phi-4 struggles with factual knowledge and detailed instruction-following, two weaknesses it attributed to the model’s size. Microsoft did not detail the data used to train the model, or the energy consumption and carbon emissions of either training or deploying the system. It did, however, detail the “novel” process it used to design a powerful small model.
| This involved three main steps: a careful curation and filtering of high-quality data (“including web content, licensed books and code repositories”), a refined post-training process and a synthetic data pipeline. The researchers used the organic data as “seeds” for the synthetic data, running it through a generative AI system to transform it into synthetic information far better suited to the internal structure of a language model. |
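To make that pipeline concrete, here’s a minimal sketch of what seed-driven synthetic data generation can look like. Microsoft hasn’t published the exact prompts, filters or models involved, so the `rewrite` stub and the quality filter below are hypothetical placeholders, not Phi-4’s actual recipe.

```python
# Illustrative seed -> synthetic-data pipeline. The rewrite() stub stands in
# for a generative model call; real pipelines use far richer prompts/filters.
from dataclasses import dataclass

@dataclass
class Sample:
    seed: str        # organic text: web page, licensed book, code file
    synthetic: str   # model-generated variant actually used for training

def rewrite(seed: str) -> str:
    """Stand-in for a generative model call that turns a raw seed into
    structured training text (e.g., a worked exercise or Q&A pair)."""
    return f"Q: Summarize the following passage.\n{seed}\nA: ..."

def build_corpus(seeds: list[str]) -> list[Sample]:
    quality = lambda s: len(s.split()) > 5   # placeholder quality filter
    return [Sample(s, rewrite(s)) for s in seeds if quality(s)]

seeds = ["The Prithvi model reconstructs cloud-obscured satellite imagery ..."]
print(build_corpus(seeds)[0].synthetic)
```

The design point is that the generative step reshapes organic text into the formats a language model learns from most efficiently, rather than simply adding more raw text.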
| |
| | ONLY 2 DAYS LEFT: This Startup is Giving AI The Upgrade it Needs | | The data center market was valued at $194.8 billion in 2022, and is expected to grow at a CAGR of 10.9% through 2030. With generative AI expanding rapidly, over 90 zettabytes of data will need to be created and stored in 2025, bringing issues of data storage into the spotlight. That's why Atombeam’s tech is poised to disrupt how machines communicate and store data, reducing data size by up to 75%. | Atombeam's patented (AI-powered) Neurpac software can make networks up to 4x faster, while also making them more secure, helping clients avoid billions in expensive hardware upgrades. And this applies to generative AI, too. Atombeam's tech allows generative AI applications to run faster (while consuming less power), something that could dramatically supercharge AI efficiency, transforming the whole landscape.
| Plenty of industry players have taken note of Atombeam's approach; in addition to partnering with NVIDIA, Intel, and Ericsson, the company has contracts in place with the U.S. Air Force and Space Force to develop Neurpac for use in military satellites. | Atombeam’s raise just hit $12M, but this round is closing in 48 hours. Become an Atombeam shareholder for just $8/share before this round closes. |
| |
| China sets up AI standards committee as global tech race intensifies (Reuters). Elon Musk attacks SEC as he shares a letter saying it is probing Neuralink (BI). Officials downplay NJ drone concerns as online suspicion builds (Politico). TSMC says first advanced U.S. chip fab ‘dang near back’ on schedule. Here’s an inside look (CNBC). South Korea faces power vacuum amid deepening political crisis (Semafor).
| If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here. | | |
| |
| Meta rolls out a different kind of AI model | | Source: Meta |
| Meta last week launched two new AI models. In a departure from the regular slew of model announcements and launches, these aren’t your normal generative AI models. | The details: The first, called Meta Motivo, is a foundation model designed to achieve more realistic movements of virtual avatars. According to Meta, it paves the way toward a more realistic Metaverse. | Trained on an unlabeled dataset of human movements, the model is able to learn human-like behaviors, something that solves several body-control problems, including motion tracking and pose-reaching. The model was trained on the AMASS dataset; Meta did not include information regarding energy use or carbon emissions, though the paper did mention the high cost of compute. The second model, Meta Video Seal, enables users to add invisible watermarks to videos that can later be used to determine content provenance and authenticity. Meta, which said the solution is resilient to video editing efforts, positioned it as a potential answer to the flood of AI-generated deepfakes that has swept the internet in recent months. As with Motivo, the paper did not mention the energy cost or carbon emissions of training and deploying the model.
| You can try a demo of both models here. | These latest releases come as Meta has steadily increased its AI investments; it is expected to spend as much as $40 billion in capital expenditures this year alone. | Meta said it is openly releasing the code in addition to the models themselves, making these two models more open than other Meta releases, such as Llama. | None of the research here has been peer-reviewed or independently verified. |
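For a sense of how invisible watermarking works at all, the toy sketch below hides a bit string in the least-significant bits of a frame’s pixels and then reads it back. To be clear, this naive LSB scheme is not Meta’s method; Video Seal uses a learned embedder and extractor designed to survive edits and compression, which LSB tricks famously do not. This only illustrates the embed-then-extract concept.

```python
# Toy invisible watermark: hide bits in pixel LSBs (illustrative only;
# Video Seal's learned, edit-resilient scheme works very differently).
import numpy as np

def embed(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    flat = frame.reshape(-1).copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(frame.shape)

def extract(frame: np.ndarray, n_bits: int) -> np.ndarray:
    return frame.reshape(-1)[:n_bits] & 1                  # read LSBs back

message = np.frombuffer(b"deepview", dtype=np.uint8)
bits = np.unpackbits(message)                              # 64 payload bits
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)

marked = embed(frame, bits)
# Each pixel changes by at most 1 out of 255: imperceptible to the eye.
assert np.max(np.abs(marked.astype(int) - frame.astype(int))) <= 1
assert bytes(np.packbits(extract(marked, bits.size))) == b"deepview"
```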
| |
| UK considers upending copyright law in favor of AI companies | | Source: Unsplash |
| U.K. government officials, according to reporting from the Sunday Times and Politico, are planning to launch a consultation this week regarding a series of proposals intended to reshape copyright law in the country. | The details: According to the reports, the proposal at hand would legally allow AI developers to train their models on copyrighted materials. | Part of the proposal involves a new “personality right,” which would give celebrities legal protections against the use of generative AI tools to mimic their likeness without permission. It would also enshrine an “opt-out” framework for the legal training of AI models on copyrighted data, meaning artists would have to specifically opt out to keep their work from being used in training. Seemingly, it would not require developers to license training data, meaning artists who choose to opt in, or forget to opt out in time, would receive no compensation for the inclusion of their content in the training sets behind these highly commercialized models.
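The reports don’t specify how opting out would actually work. The closest existing mechanism is the robots.txt crawler directive, which some AI companies already honor for training-data collection; OpenAI’s GPTBot and Google’s Google-Extended are real crawler tokens, though whether a UK scheme would build on this mechanism is an open question.

```text
# A site-wide training opt-out as it exists today, via robots.txt.
# GPTBot (OpenAI) and Google-Extended (Google AI training) are real tokens;
# the UK proposal's actual mechanism has not been specified.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

As Newton-Rex argues below, this only controls crawling of your own site; copies of the same work hosted elsewhere remain out of your control.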
| Artists, including Paul McCartney, have come out against the proposal, calling instead for an ‘opt-in’ regime. The proposal additionally prompted a letter from the Copyright Alliance, a U.S. organization, warning Peter Kyle, secretary of state for science, innovation and technology, and Culture Secretary Lisa Nandy against the proposal, according to the FT. | “Any UK government action that degrades copyright — by creating an exception for AI use, for example — creates a legal environment that discourages U.K. and U.S. creators and rights holders from participating and investing in creative endeavors within the United Kingdom,” the letter reads. | The government responded by saying no decision had yet been made. | The backlash: Ed Newton-Rex, CEO of Fairly Trained, said the proposals would be “disastrous for creators [and] the creative industries.” | “Generative AI competes with its training data. This would allow AI companies to exploit people's work to build highly scalable competitors to them,” he wrote. “Opt-out doesn't work,” Newton-Rex recently told the Culture, Media and Sport Committee. “There is no way of successfully opting your work out of training given that there are legal copies of your work all over the internet, and you have no control over these legal copies. Opt-out is really an illusion for rights-holders. It gives rights-holders no control.”
| The landscape: A U.K. parliamentary committee in February published a 95-page report on AI that specifically included mention of the copyright challenges posed by the tech. | "LLMs may offer immense value to society. But that does not warrant the violation of copyright law or its underpinning principles," the report reads. "We do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process."
| OpenAI, shortly before the publication of said report, told the House of Lords that it would be "impossible to train today's leading AI models without using copyrighted materials." | Meanwhile, the copyright infringement lawsuits filed against AI developers have become numerous, and none has yet been dismissed or otherwise resolved. The AI developers have held firm to their position that it is morally and legally fair to train their models on copyrighted material without the permission or compensation of the original creators; the creators have largely, and vehemently, disagreed, leading to battles in U.S. courts over the fundamental question of whether training an AI model is legally protected as “fair use.” | | This will be an interesting one to watch. However things land, we’re looking at a pivotal point in the U.K., one that will either enshrine protections for creators or give tech companies free rein, something that might well make London a desirable location for AI development. | I’ll just add, as Newton-Rex has said in the past, that requiring licensing of copyrighted material does not prevent society from reaping the benefits generative AI has to offer; health models, weather models, climate models, geospatial models — all of which offer an inkling of the technology's true positive promise — don’t need copyrighted data to function, and they don’t threaten to disrupt entire industries. | | Which image is real? |
| |
| 💭 A poll before you go | Thanks for reading today’s edition of The Deep View! | We’ll see you in the next one. | Here’s your view on data center air pollution: | 44% of you said you don’t live near a data center. 22% said they do, and that the air pollution is terrible. | 10% of you said you live near a data center but haven’t noticed any air pollution from it. | What do you think about opt-out? | |
|
|
|