Good morning. Markets had a much, much better day yesterday, all in light of the Fed’s decision to hold rates steady amid an environment of increasing economic uncertainty.

And tech had a much better day than it’s had in a while; $NVDA ( ▲ 1.81% ), for one, clawed back some of its losses from the prior session, on an affirmation from Jensen Huang that the tariffs aren’t a concern.

Now, the trick is for this to last.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🔬 AI for Good: Biodegradable plastic
🚘 Tesla moves toward the possibility of a ride-hail service
👁️🗨️ Nvidia’s pinned its hopes on scaling — Here’s what researchers think
AI for Good: Biodegradable plastic
Source: Unsplash
That we are living through a plastic crisis is obvious. Cheap and easy to produce, plastic has been proliferating for decades. The problem is that it doesn’t break down.

That has led to plastic pollution in larger forms (think of all the water bottles you’ve seen in the woods) and in smaller forms (the microplastics that get into our food, and then into us).

While nonprofits, companies and governments have been working to clean up all that plastic pollution, its production hasn’t stopped. So researchers have been looking for equally flexible, cheap alternatives. And some have been using machine learning to do it.

What happened: Researchers at TNO, an independent research organization in the Netherlands, recently developed a machine learning model called PolySCOUT that, in short, is being used to develop biodegradable plastic.

Plastics are made up of polymers, which are simply very large molecules. PolySCOUT was designed to predict suitable polymers; by entering certain chemical constraints — biodegradability, plastic-like properties — researchers can have the model suggest materials that fit that description.
TNO is already using the model as part of its collaboration with textile manufacturer Senbis, which is on a mission to find a biodegradable synthetic textile fiber.

Why it matters: Without the model, searching for the right polymers is a hunt for needles in haystacks; with the model, the first step of the process is massively accelerated. Milad Golkaram, who led the team behind PolySCOUT, said that the model “can be huge for society. You can include all these variables and make producing and using biodegradable plastic a lot more attractive.”
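PolySCOUT itself isn’t public, but the basic idea of this kind of screening is simple: score a pool of candidate materials with property predictors and keep only the ones that satisfy the design constraints. Here is a minimal sketch of that idea; the candidate names, property values and thresholds below are hypothetical stand-ins for illustration, not TNO’s actual model or data.

```python
# Hypothetical sketch: screen candidate polymers against target properties.
# The "predictions" here are hard-coded stand-ins; a real system would use
# trained property-prediction models in their place.

CANDIDATES = {
    # candidate name -> (predicted biodegradability score 0-1,
    #                    predicted tensile strength in MPa)
    "PLA-variant-A":   (0.85, 48.0),
    "PBS-copolymer-B": (0.72, 35.0),
    "PE-like-C":       (0.05, 30.0),
    "PHA-blend-D":     (0.91, 26.0),
}

def meets_constraints(biodegradability: float, strength_mpa: float,
                      min_biodegradability: float = 0.7,
                      min_strength_mpa: float = 30.0) -> bool:
    """Return True if a candidate satisfies the (hypothetical) design targets."""
    return biodegradability >= min_biodegradability and strength_mpa >= min_strength_mpa

# Keep only candidates that are both biodegradable and mechanically plastic-like.
hits = [name for name, (bio, strength) in CANDIDATES.items()
        if meets_constraints(bio, strength)]

print(hits)  # ['PLA-variant-A', 'PBS-copolymer-B']
```

The value of a model like PolySCOUT is in the prediction step: instead of synthesizing and testing every candidate in a lab, the slow experimental work only starts once the shortlist is already small.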
What if they built houses like cars?

Did you know that car factories, like Ford’s, can output one car per minute? That’s right, there’s an assembly line for cars, but why hasn’t anyone done that for homes? Well, it turns out someone is trying to…

BOXABL is the home construction company bringing assembly-line automation to the home industry. With its patented technology and 53 patent filings, BOXABL believes it has the potential to disrupt a massive and outdated trillion-dollar building construction market.

Most houses take seven months to complete. BOXABL can roll one off the assembly line every four hours, including electrical, HVAC and plumbing! Now, the company is raising funds and has made shares available to the public—but the round is closing soon on StartEngine!

Currently, shares are being offered at $0.80 per share. With a $1,000 minimum investment, you can join 40,000 investors to help solve the housing crisis.

Time is running out to invest in BOXABL on StartEngine. Invest Now!
Tesla moves toward the possibility of a ride-hail service
Source: Unsplash
The news: Tesla was granted a permit from the California Public Utilities Commission (CPUC) that will enable the company to operate a transportation service in the state, the first stage of a very long regulatory process that might end with a functioning Tesla robotaxi service. The carmaker applied for the permit in November.

The catch: The permit Tesla was granted doesn’t have anything to do with autonomy. Tesla received a Transportation Charter Permit (TCP), which comes with certain qualifications — namely, that the cars will be owned by Tesla and operated by its employees.

According to the CPUC, such permits are often used for round-trip sightseeing or for transporting workers to and from a farm. To operate like Uber or Lyft, Tesla would need a different permit (for a transportation network company), and to operate like Waymo or the late Cruise, Tesla would need to separately apply to both the CPUC and the California Department of Motor Vehicles for permits allowing the testing or passenger operation specifically of autonomous, driverless vehicles.
A step further: Tesla is still most definitely planning on attempting to get into the robotaxi/ride-hailing business, but, at least in California — which has become something of a breeding ground for that business — it is quite unclear just how far away the automaker is from even being allowed to begin testing it.

This all comes as Tesla’s misnamed Full Self-Driving (FSD) software — which requires the hands-on, eyes-on attention of its driver — lacks the kind of key redundancy layers that make systems like Waymo’s work as well as they do. Waymo vehicles come laden with a massive sensor array, complete with radar and lidar, in addition to cameras and neural networks. And even with all that, Waymos are far from perfect; a Californian last year got stuck in a Waymo that, instead of taking him to the airport, drove around in circles.

Teslas, meanwhile, operate with neural networks alone, which are susceptible to unpredictable mistakes. Tesla is currently facing more than a dozen lawsuits, in addition to several federal investigations, over its self-driving and driver-assist software.

Elon Musk has said Tesla will launch a robotaxi service this year in Texas, a state that takes a hands-off approach to the regulation of autonomous vehicles.
Help customers seamlessly integrate your tools and services on HubSpot

With flexible UI and extensibility tools, HubSpot’s developer platform allows you to build apps that help teams unify their tech stack across our platforms.

From listing your app in the HubSpot Marketplace to building custom solutions, there’s something for everyone to grow better.

Learn More
Dow closes nearly 400 points higher after Fed says two rate cuts are still in the cards for 2025 (CNBC).
Academics accuse AI startups of co-opting peer review for publicity (TechCrunch).
Amazon is blundering into an AI copyright nightmare (The Verge).
OpenAI’s ‘agents’ pose a risk to DoorDash, other consumer apps (The Information).
European Union lays out how Apple must open its tech up to competitors under bloc’s digital rules (The AP).
|
Nvidia’s pinned its hopes on scaling — Here’s what the researchers think
Source: Nvidia
“Almost the entire world got it wrong,” Nvidia chief Jensen Huang said Tuesday, referring to the so-called scaling laws of AI. “The scaling law of AI is more resilient and, in fact, hyper-accelerated. The amount of computation we need, at this point … is easily 100 times more than what we thought we needed at this time last year.”

For Nvidia, scale means more compute, and so more business.

But the impetus behind that scale, for some in the industry, is simple: the belief that current systems can be scaled into some sort of hypothetical artificial general intelligence (AGI), an idea shared by both Anthropic’s Dario Amodei and OpenAI’s Sam Altman.

But here’s the thing about scaling laws: they are an observation, not a physical law.

The general idea is that if you increase the size of a model (parameters), the size of its training set (data) and the amount of computational resources (compute), you will also increase model performance. Largely, this observation has held up, leading to such phrases as ‘scale is all you need.’

But a scientific ‘law’ is a description — usually mathematical — of a specific observation. It doesn’t deal with hows or whys; it just deals with verifiable fact. The most prominent example of this is probably Newton’s universal law of gravitation (remember the apple falling from the tree?). These companies, whose businesses are pretty reliant on scaling laws continuing to hold, have hypothesized that scaling will keep working. But there’s no evidence that it will, especially considering that the thing they’re chasing — AGI — has no grounding in science.
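To make “observation, not law” concrete: the scaling laws are empirical curve fits. One widely cited form, from the Chinchilla paper (Hoffmann et al., 2022), models training loss as a power law in parameter count N and training tokens D; the constants and exponents below are that paper’s reported fit to its own experiments, quoted approximately here, not quantities derived from any underlying theory.

```latex
% Chinchilla-style empirical fit: training loss as a function of
% parameters N and training tokens D. E, A, B, alpha, beta are fitted
% constants, not derived quantities.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% Reported fit (approximate): E \approx 1.69,\ A \approx 406.4,\ B \approx 410.7,
% \alpha \approx 0.34,\ \beta \approx 0.28
```

Nothing in that expression guarantees the curve keeps bending the same way at scales nobody has yet measured; it only summarizes the data points collected so far.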
Here’s what the researchers think: A recent report by the Association for the Advancement of Artificial Intelligence (AAAI) on the future of AI research stated that, although neural networks and large language models (LLMs) have notched significant improvements over the years, they are missing a number of key capabilities needed to achieve something that might be defined as a general intelligence.

Mainly, current systems struggle to generalize outside of their training set, as well as with causal inference, counterfactual reasoning, memory, recall and long-term planning and reasoning. There are limits, according to the report, to the transformer architecture that undergirds these models.

In its survey of 475 researchers, roughly 76% said it is “unlikely” or “very unlikely” that scaling current systems will lead to something resembling AGI. 80% of researchers polled added that, due to overwhelming hype, the current perception of AI capabilities does not match the reality of the research; 90% agreed that this hype is hindering research.
In a later poll of 176 researchers, only 16% said they believe that neural networks alone are enough to achieve human-level AI agents. Researchers want to see more study of non-neural approaches to AI, or of combinations of neural networks with other systems.

“There is a risk that the current convergence of the field towards focusing on neural approaches could impede innovation,” according to the report.

AGI has been at the root of a few different forms of industry hype lately. It has been invoked as a cause for fear by some companies and researchers, and also as a pathway to a utopia on Earth, a world with instant cures for all diseases (plus climate change) and no more jobs (which, somehow, wouldn’t be a problem). It’s also increasingly being leveraged by companies as a means of strengthening their political positions.

But the idea that artificial systems might someday, somehow, match or exceed the capabilities or intellect of the human mind — beyond being almost impossible to empirically measure or prove — has been fiercely debated. That conversation has not been helped along by the lack of a universal, scientific definition of AGI. It’s almost impossible, after all, to achieve something you can’t define.

Further, it’s a challenging position to argue, considering how little researchers understand about the brain, consciousness, intelligence or sentience, or how those things relate (or don’t) to one another. And that’s not to mention the enormous resource intensity that would be required by systems even close to approaching the complexity of an organic mind, a level of energy that some researchers have said simply doesn’t exist.
“Discussions of AGI, particularly in the popular press, have fueled speculation that sentience or consciousness could be a characteristic of AGI system,” according to the report. “AI researchers generally steer clear of such speculations, pointing out that the analysis and prediction of behavior is independent of attributions of sentience … AGI is not a formally defined concept, nor is there any agreed test for its achievement.”

There is a wholly separate conversation we should all be having here, one that answers the “should we” question of AGI.

But considering the ways in which AGI is leveraged as a means of ramping up hype by companies that are understandably incentivized to sell products, dealing with the challenging realities of that hypothetical is even more important.

I have said before that the big indication, to me, of a system approaching AGI would be something trained on a very small dataset that is able to generalize reliably — so, no hallucinations of the type that LLMs make — outside of that dataset.

But when it comes to the discourse here, which often goes far beyond “AGI” and deep into ever more fictitious realms of superintelligence and (shudder) “digital god,” I find it useful to deal in something that seems rather analogous.

You know, time travel.

If Elon Musk said he was building a time-traveling Tesla, I think you’d find very few people who would express optimism that he’ll succeed. That said, if his attempt to do so involved driving a Tesla at 800 miles per hour down public roads, I imagine the government would heavily regulate that effort, since the approach to building a time-traveling machine could cause plenty of harm on its own.

That’s the race to AGI.

The effort to get there is causing plenty of harm today — environmentally, legally, societally and psychologically. And if the effort to build something that’s un-buildable is harmful, that effort ought to be controlled and restricted.

The companies — as we saw with their policy proposals — don’t want to be restricted.

And governments, many of which have bought the hype, seem uninterested in attempting to restrict them.

Which image is real?

🤔 Your thought process:

Selected Image 1 (Left):

“A ship under sail almost always has crew at the rigging. Image 2 has no people on board. Weird. However, image 1 almost had me fooled by the wind direction, which seems to come from landside. Also weird. But I guessed right.”
|
💭 A poll before you go

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

Here’s your view on having a robot:

Half of you don’t see what a robot would do that’s worth paying for. 8% would pay anything to have one of those robots; 12% would pay $20,000; and 18% would pay $10,000.

If you live in Texas, would you take a ride in a hypothetical Tesla robotaxi?

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.

Boxabl Disclosure: *This is a paid advertisement for Boxabl’s Regulation A offering. Please read the offering circular here. This is a message from Boxabl.
|
|
|