
Artificial Intelligence

AI Treaties

Jacob Rozansky



We live in an era where autonomous weapons, machines that can kill without a human pulling the trigger, are within reach. Many militaries already employ artificial intelligence to aid in missions, and attacks orchestrated with help from smart machines are all but inevitable. International treaties against the use of such weapons may become necessary.

This type of high-tech weaponry will make uneven conflicts even more stratified and strengthen the position of advanced fighting forces. The same technology could be used to resist powerful regimes, but those with the resources will likely have the upper hand.

Like the nuclear arms race of the 20th century, the race for more advanced weaponry threatens countless people around the planet. Unlike the Manhattan Project, however, this modern arms race will be far cheaper to join: a reasonably capable killer robot costs a small fraction of what weapons-grade uranium does.

With the advent of nuclear weapons also came the promise of nuclear energy. In the realm of AI, many of the frightening technologies that would make AI weapons possible are already in use for benign purposes.

Despite nuclear treaties, thousands of warheads sit idle today, so it is unclear that a similar agreement would do the intended job. Consider, too, that AI weapons can be exquisitely targeted in a way nuclear weapons cannot, which makes them far more attractive in actual warfare than as mere bargaining chips.

Outside the domain of weapons, there is some agreement that AI should be limited at the governmental scale. Ideals like transparency and accountability, human-centered design, and preparing for a transformed labor force have been broadly embraced by dozens of nations. That said, guidelines from the Organisation for Economic Co-operation and Development (OECD), one of the authorities expounding AI principles, are not enforceable. The more nebulous principles, such as accountability, are readily accepted by many. More contentious questions, like what to do about labor markets made obsolete by large-scale automation, inspire less agreement. The nations and companies of the world hold very different views about their own responsibilities in these domains, and it is not clear that any agreement on the matter would benefit shareholders or politicians.

A code of ethics for AI growth can be useful. An international body working to identify clear red lines could better reward the bravery of whistleblowers. Well-defined values may strengthen the resolve of concerned citizens.

The field of AI remains in its infancy, and large-scale use of the technology has only taken off in recent years. Creating laws or guidelines for such a dynamic technology will be awkward. The very idea of what artificial intelligence means may change drastically in the coming decades, and a legal framework built today could prove woefully inadequate for tomorrow's technological challenges.

Some industry leaders worry that any regulation could impede innovation. However, implementing many of the more popular principles would cost far less than the reputational damage caused by rollouts of unsafe or invasive programs.

If these initiatives prove successful, future technology could be more democratic, more transparent, and more human than if left unchecked.

Jacob writes about AI, automation, and tech in politics. He’s in the business of turning daunting topics into digestible tidbits.


Apple

Bloomberg Says Apple and Microsoft Are Beefing Again

Colin Edge


In recent years, we had reason to believe Apple and Microsoft were mending fences. But according to new analysis from Bloomberg, the tech giants may be back at it like Godzilla vs. Kong.

Claims of an improved relationship between the two companies were not unfounded: The Microsoft Office app became available on iPhones/iPads; Apple invited a Microsoft exec to a product launch; and Apple’s TV app showed up on the Xbox. 

So where did things go south?

Bloomberg points to Apple’s Mac revamp announcement back in November, which featured “PC guy” John Hodgman. When Apple revived Hodgman’s character for a bit of fun, the Apple/Microsoft beef was arguably reignited. Intel followed suit, flipping Justin Long’s loyalty in their recent ad campaign. 

These subtle digs occur within a wider landscape of disagreement over Apple's App Store. Microsoft is one of many voices in tech frustrated with the Cupertino company's allegedly anticompetitive e-commerce practices.

Microsoft clashed with Apple about a year ago when the companies couldn't agree on how to host Microsoft's all-in-one gaming platform, xCloud. Microsoft wanted xCloud to give subscribers a plethora of games through a single app subscription, while Apple would only allow games to be downloaded as individual apps.

The ongoing Apple vs. Epic Games trial seems to have exacerbated the App Store debacle. Microsoft has thrown its support behind Epic: a Redmond executive has already testified that Apple's App Store policies thwarted Microsoft's own gaming efforts.

The companies will also be competing in many of the same markets in the future. Apple has doubled down on AR, with a headset rumored to be arriving next year. As Microsoft's HoloLens has served as something of a pioneer in AR headsets, a new piece of dedicated AR hardware from Apple could add insult to injury.

Both Apple and Microsoft will also have hats in the ring in cloud computing and AI – hot markets in the future of tech.

While surface tensions between corporations tend to be mostly token gestures, the strife between these two companies runs deep and has a long history. Keep an eye out for the occasional media fireworks show.


Artificial Intelligence

EU to US and China: “Build the AI and We’ll Regulate It.”

Jacob Rozansky


Last week, the European Union presented draft legislation titled the “Artificial Intelligence Act” (AIA). The landmark bill is one of the first large-scale attempts to rein in intelligent technologies as big-data computing reaches new heights.

The bill seeks to regulate facial recognition, which opens avenues for governments to surveil citizens well beyond what is humanly possible. It also contends with so-called “social credit systems.” These projects, mainly spearheaded by China, assign moral weight to political action. Social credits punish political dissent with higher interest rates for loans, for example. Even associating with dissidents can hurt one’s “credit score.”

The law would fine defiant companies six percent of their annual revenue or $20 million, whichever is higher. Representatives of the Western tech industry derided the bill as punitive. Benjamin Mueller, a senior policy analyst at the pro-business Washington think tank Center for Data Innovation, called the law “a damaging blow to the Commission’s goal of turning the EU into a global AI leader.” The US’s appetite for such regulation is low: Republicans favor lax laws, while Democrats receive immense funding from Silicon Valley.
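
To make the penalty concrete, here is a quick back-of-the-envelope sketch in Python using the figures as reported above; the final legislative text may define the thresholds differently.

```python
# Quick arithmetic on the fine described above. Figures are as reported
# in this article; the final legislative text may differ.
def max_fine(annual_revenue_usd: float) -> float:
    """Return the greater of 6% of annual revenue or a $20 million floor."""
    return max(0.06 * annual_revenue_usd, 20_000_000)

print(max_fine(100_000_000))     # smaller firm: the floor applies -> 20,000,000.0
print(max_fine(50_000_000_000))  # large firm: 6% applies -> 3,000,000,000.0
```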

Digital rights activists run the opposite flank, contending that the draft falls short. Biometric tracking (using fingerprints or DNA to trace people) remains unregulated in the bill's current form, and prominent organizations will exploit any loophole to capitalize on user data.

The People’s Republic of China’s response has been limited, though Xi Jinping has been vocal in his opposition to global governance by Western powers. “Any attempts to build walls and decouple are violating economic laws and market rules,” the Chinese Communist Party head said. Xi was discussing unilateralism broadly, but the eastern superpower’s sentiment on the topic has been consistent.

China has operated in a world of Western restrictions for decades and will likely play by the rules so long as doing so expands its geopolitical influence.

Although the European Union encompasses only around 5 percent of the world’s population, the bloc has an outsized influence on global regulation and norms. The EU’s 2016 General Data Protection Regulation (GDPR) governs all data collected in the region and, according to Forbes, has also served as a de facto framework for businesses abroad seeking to operate in EU countries.

The AI law will likely undergo significant changes as it winds through the lawmaking process. Tech lobbyists, activists, and stakeholder countries will shape any future regulations, and industry will push for terms that work in its favor. The final version of the AIA is not expected to pass for at least two years.

According to Human Rights Watch, 10 million Europeans live under authoritarian rule. The new law may keep technological forces at bay and encourage more open societies. Unfortunately, governments can subjugate without the help of automated systems.

The AI bill is a bold first step to regulating a powerful and complex technology. Critics note that by the time the text becomes law, it could be obsolete. But AI is here to stay, and many of the trends the EU is challenging will be relevant deep in the future.


Artificial Intelligence

Medical Triumphs Using AI Continue to Mount

Jacob Rozansky


When people interact with healthcare systems, it is a personal affair. Patients seek caring professionals to guide them through a vulnerable experience. At the systemic level, the world of medicine provides rich data for analytics. Artificial intelligence can give physicians insights that enhance care.

The first area where machine learning can support doctors is diagnostics. Reading questionnaires, charts, and other bits of information to develop accurate diagnoses is often challenging, even for experienced physicians. In recent years, Google has used machine vision to separate cancerous tumors from benign growths faster than professionals can. While the system is imperfect, using it in conjunction with trained staff will hasten the process.
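
Google's system relies on machine vision over medical images, but the underlying idea, a model learning to separate malignant from benign cases, can be sketched with scikit-learn's built-in (tabular) breast cancer dataset. This is a toy illustration only, not the system described above.

```python
# A minimal sketch of tumor classification, for illustration only.
# It uses scikit-learn's built-in tabular breast cancer dataset rather
# than medical images, so it is not Google's machine-vision system;
# it just shows the same basic idea of separating malignant from benign cases.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# In a clinical workflow, predictions like these would be reviewed by
# trained staff rather than acted on automatically.
print(classification_report(y_test, model.predict(X_test),
                            target_names=["malignant", "benign"]))
```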

Using AI, researchers from the University of Wisconsin-Madison uncovered an early indicator of learning disabilities, autism, and other health risks. By analyzing millions of records, their tool can pick out Fragile X syndrome and diagnose learning disabilities up to five years earlier than other methods. By comparing DNA to health records, AI systems can discover relationships between genetic code and patient outcomes; unaided, researchers would drown in the mountains of data the medical system produces.
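
As a rough illustration of how genotype-to-outcome mining works, the sketch below fits a simple model on synthetic data. The markers, effect sizes, and model choice are all assumptions for demonstration, not the Wisconsin team's actual method.

```python
# Illustrative sketch only: correlating genetic markers with patient
# outcomes on synthetic data. Real genotype/health-record studies involve
# far more features, confounders, and privacy safeguards.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_markers = 5000, 50

# Synthetic binary genetic markers (1 = variant present).
markers = rng.integers(0, 2, size=(n_patients, n_markers))

# Assume, for this toy example, that marker 7 raises the risk of a
# recorded outcome; everything else is noise.
risk = 0.05 + 0.30 * markers[:, 7]
outcome = rng.random(n_patients) < risk

model = LogisticRegression(max_iter=1000).fit(markers, outcome)

# Markers with the largest coefficients are candidate associations
# for researchers to investigate further.
top = np.argsort(np.abs(model.coef_[0]))[::-1][:5]
print("Top candidate markers:", top)
```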

Even before the COVID-19 pandemic, many in the healthcare industry suffered from burnout. The leading cause of this burnout, according to Business Insider, is entanglement with hospital administrations. Cumbersome paperwork drains the energy from a shallow pool of clinicians. People don't become doctors to deal with hundreds of forms and waivers. AI can take over bureaucracy's most robotic responsibilities.

Beyond relieving healthcare workers, chatbot assistants can collect patient information. Not only does this offer a conversational interface, but incomplete information can be caught and corrected before it reaches administrators. Chatbots are becoming more adept at conversing at a human level and can foster a more private environment for users.
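
A minimal sketch of that validation idea is below; the intake fields and checks are hypothetical, not any particular vendor's chatbot API.

```python
# A minimal sketch of a scripted intake assistant that validates answers
# before they reach hospital administrators. Field names and checks are
# illustrative, not any real product's interface.
import re

QUESTIONS = {
    "name": ("What is your full name?", lambda a: len(a.split()) >= 2),
    "dob": ("Date of birth (YYYY-MM-DD)?",
            lambda a: re.fullmatch(r"\d{4}-\d{2}-\d{2}", a) is not None),
    "symptoms": ("Briefly describe your symptoms.", lambda a: len(a) > 3),
}

def run_intake(answers):
    """Validate a dict of answers; return (clean record, follow-up prompts)."""
    record, problems = {}, []
    for field, (prompt, is_valid) in QUESTIONS.items():
        value = answers.get(field, "").strip()
        if is_valid(value):
            record[field] = value
        else:
            problems.append(f"Please re-enter: {prompt}")
    return record, problems

record, problems = run_intake(
    {"name": "Ada Lovelace", "dob": "1815-12-10", "symptoms": ""}
)
print(problems)  # ['Please re-enter: Briefly describe your symptoms.']
```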

Human touch is irreplaceable, but many hospitals cannot afford that luxury, especially during a pandemic. In some cases, a rubber glove filled with warm water is the closest thing to end-of-life solace a patient receives. Robotics companies are developing soft, AI-driven robots to provide comfort. While targeted at older people, the technology promises to reduce stress for some patients. Delicate robot arms can also perform some surgical procedures, though robotic surgeons have a long way to go before widespread use.

Drug discovery is another data-heavy field that new computational techniques are improving. In Australia, complications related to an anti-fungal agent perplexed chemists; only with AI analysis did a graduate student discover the complex process by which the drug turned deadly. The future of therapeutics runs directly through intelligent machines.

On the other side of the skin, wearable technology like smartwatches provides a detailed picture of health. Some wearables are pretty mundane, tracking things like heart rate. Others can reduce stress with electrical pulses. One tool in the works allows blind people to "see" using a series of electric shocks. While many of these technologies do not rely on artificial intelligence, the data they collect can be synthesized and mined for powerful patterns.
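
As a toy example of mining wearable data for patterns, the sketch below flags unusual heart-rate readings with a rolling z-score on simulated data; real products use far richer signals and models, and the threshold here is arbitrary.

```python
# Illustrative sketch: spotting unusual heart-rate readings in wearable
# data with a simple rolling z-score. The data is simulated and the
# threshold is arbitrary; this is not any product's actual algorithm.
import numpy as np

rng = np.random.default_rng(1)
heart_rate = rng.normal(72, 4, size=24 * 60)   # one day of per-minute BPM
heart_rate[900:915] += 45                      # simulated abnormal spike

window = 30
flagged = []
for i in range(window, len(heart_rate)):
    recent = heart_rate[i - window:i]
    z = (heart_rate[i] - recent.mean()) / (recent.std() + 1e-9)
    if abs(z) > 4:
        flagged.append(i)

print(f"Flagged {len(flagged)} minutes for review, "
      f"starting at minute {flagged[0] if flagged else 'n/a'}")
```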

The hope is that new computational methods will improve the healthcare experience and bring more humanity to the field.




