
Elon Musk has a predilection for grandeur. The billionaire tech provocateur made his fortune as a founder of the revolutionary online-payments company PayPal, and since then, he has announced his intention to revolutionize cars, trains, space travel, intercontinental flight, and city driving. SpaceX, his aerospace company, has begun work on the infrastructure to beam internet access down to Earth from satellites in orbit around the planet. And this week, Musk shifted his public ambitions to his next target: the human brain.

Neuralink, a neurotechnology company owned by Musk, crept out of the corporate shadows Tuesday with a live-stream that included one of the founder’s signature big promises: The company is developing a device, implanted inside the brain, that will supposedly allow people to control computers and other devices with their minds. At the announcement, Musk said the company is on track to begin testing the implants in human patients as soon as next year.

In one sense, what Musk described during Neuralink’s debut sounds dazzling: threads, thinner than a human hair, robotically inserted into the brain through skull holes bored by a laser that does not yet exist. The threads, according to Neuralink’s leadership, will be less likely to cause internal damage and able to transmit far more information than the rigid implants currently available, which allow people with physical disabilities to interact with computers. Once the device is perfected, Musk said in his announcement, the host brain would “achieve a symbiosis with artificial intelligence.”

If it delivers on its promise, this technology could change the lives of people with paralysis and other physical disabilities—and, if the most sweeping claims of Neuralink’s public debut are realized, it could radically transform what it even means to be human. But alongside the device’s promise lies its potential to realize a host of modern anxieties about technology’s ever more inextricable role in the way humans perceive and interact with the world. The foundation of medicine is a slow, steady approach to innovation, which is at odds with Silicon Valley’s edict to move fast and break things. In neurotechnology, the thing that might get broken is people’s minds.

Because of its reliance on buzz and venture capital, Silicon Valley both requires and rewards big swings—“moonshots,” in the industry’s vernacular. This is an environment in which Musk has thrived, and although he’s a controversial figure, he’s clearly had victories. By finding ways to reuse rocket boosters, SpaceX has substantially reduced the cost of space travel. The popularity of Tesla, Musk’s electric-car company, offers a proof of concept for the automobile’s more sustainable future. But big swings come with a considerable risk of big misses. Tesla has struggled with production schedules and talent retention. The Boring Company, Musk’s tunnel-transit venture, promised to whisk cars beneath congested cities at high speed but has so far delivered only a sedan on a sled.

Because many of Musk’s ventures are still in their early stages, the consequences of bumps in the road have largely been confined to his own employees and some discouraging rocket explosions. But as Musk’s vision has grown, so too have the potential impacts on the population at large. Self-driving Teslas, for example, could endanger drivers and pedestrians if not perfectly calibrated. When Musk or anyone else starts talking about opening people’s skulls, the stakes skyrocket. In Neuralink’s debut, Musk readily admitted that the company wasn’t ready to show the public much of anything; he and the company’s president, Max Hodak, were stepping forward chiefly in hopes of recruiting new employees. Musk also revealed something Hodak reportedly wasn’t expecting him to tell the world: Neuralink’s devices have already allowed at least one monkey to mentally interact with a computer.

Although there is a technological basis for the hardware that Musk and Neuralink are promising (in 2006, a brain implant allowed Matthew Nagle to play Pong with his mind), neuroscience, like much of medical innovation, tends to be incremental and careful. The field’s pace has its critics, but a certain amount of caution is warranted because of the huge ethical and safety concerns involved in testing novel treatments on human subjects. Coming from an industry still grappling with the implosion of the blood-testing start-up Theranos, a promise to transform people’s health with fast, world-changing technology might inspire as much dread as it does hope.

The Food and Drug Administration and Neuralink’s academic partners at schools such as the University of California at Davis will exert a moderating force on the tech industry’s penchant for sprinting forward, but those kinds of checks and balances have failed spectacularly in the recent past. Theranos reached a valuation of roughly $9 billion and had contracts with Walgreens and the American military before reports of fraud surfaced. The very structure of the industry, in which glitz and hype tend to overshadow hard science, can put people at risk. When it comes to health, it might be wiser not to tell people you’re going to change their lives until you can actually do it.

Beyond whether Neuralink can deliver what it promises (scientists seem more optimistic about the hardware itself than about Musk’s grand plans for what it will do), there’s also the question of whether people even want it. According to Musk’s announcement, helping people with paralysis is just the first step. He envisions that pretty much everyone will one day get such implants in order to stay competitive with artificial intelligence, eventually achieving telepathic communication with one another and with their possessions.

There’s already considerable concern that the hyperconnectedness brought on by smartphones and social media is overwhelming and potentially harmful to the psyche. If humans are going to go even further, there’s good reason to be wary of any of Silicon Valley’s existing overlords leading the way.
