An anonymous reader quotes a report from TechCrunch: In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous -- meaning an AI algorithm would make the final decision to kill someone. "Congress doesn't want that," the defense tech founder told TechCrunch. "No one wants that." But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons -- or at least a heavy skepticism of arguments against them. The U.S.'s adversaries "use phrases that sound really good in a sound bite: Well, can't you agree that a robot should never be able to decide who lives and dies?" Luckey said during a talk earlier this month at Pepperdine University. "And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?"
When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn't mean that robots should be programmed to kill people on their own, just that he was concerned about "bad people using bad AI." In the past, Silicon Valley has erred on the side of caution. Take it from Luckey's cofounder, Trae Stephens. "I think the technologies that we're building are making it possible for humans to make the right decisions about these things," he told Kara Swisher last year. "So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously." The Anduril spokesperson denied any dissonance between Luckey's and Stephens' perspectives, and said that Stephens didn't mean that a human should always make the call, just that someone is accountable.
Last month, Palantir cofounder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that this question is being framed as a yes-or-no at all. He instead presented a hypothetical in which China has embraced AI weapons, but the U.S. has to "press the button every time it fires." He encouraged policymakers to embrace a more flexible approach to how much AI is in weapons. "You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I'm a staffer who's never played this game before," he said. "I could destroy us in the battle."
When TC asked Lonsdale for further comment, he emphasized that defense tech companies shouldn't be the ones setting the agenda on lethal AI. "The key context to what I was saying is that our companies don't make the policy, and don't want to make the policy: it's the job of elected officials to make the policy," he said. "But they do need to educate themselves on the nuance to do a good job." He also reiterated a willingness to consider more autonomy in weapons. "It's not a binary as you suggest -- 'fully autonomous or not' isn't the correct policy question. There's a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do," he said. "Before policymakers put these rules in place and decide where the dials need to be set in what circumstance, they need to learn the game and learn what the bad guys might be doing, and what's necessary to win with American lives on the line." [...] "For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S.'s hand," reports TechCrunch. "At the Hudson Institute event, Lonsdale said that the tech sector needs to take it upon itself to 'teach the Navy, teach the DoD, teach Congress' about the potential of AI to 'hopefully get us ahead of China.' Lonsdale's and Luckey's affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million in lobbying this year, according to OpenSecrets."
Read more of this story at Slashdot.