AI, like its creators, can discriminate. This reality does not reflect some bigotry within the circuits but mirrors the data fed to a system. AI bias is a systematic skew in an automated system's outputs that reflects social disparities in its training data. Such biases are all but inevitable, since AI's primary purpose is to uncover patterns. A few notable examples of AI bias help illustrate how it works.
Facial recognition algorithms used in policing are fairly accurate overall, with success rates above ninety percent. However, they perform worst on Black Americans. Findings like these have driven many tech companies to halt facial recognition projects intended for law enforcement. The explanation lies in the data sets: training data remains predominantly white, so the systems fail to classify faces of color equally well.
In 2014, Amazon turned to AI to hire software engineers. It became clear the machine discriminated against female applicants. It viewed all-women colleges as inferior to the schools where male employees graduated. It even graded phrases like “women’s rugby team” worse than “rugby team.” Of course, Amazon’s software team was predominantly male from the start. The computer learned its values through the data provided.
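The dynamic is easy to reproduce in miniature. Here's a toy sketch, in no way Amazon's actual system and using invented data, of how a scorer trained on skewed historical decisions inherits the skew:

```python
# Toy sketch (not Amazon's system): a scorer trained on skewed
# historical decisions inherits the skew. All data here is invented.
from collections import Counter

# Hypothetical past hiring data: résumés as word lists, plus the
# historical decision. Note the pattern the data quietly encodes.
history = [
    (["rugby", "team", "captain"], "hire"),
    (["chess", "club", "president"], "hire"),
    (["women's", "rugby", "team", "captain"], "reject"),
    (["women's", "chess", "club", "president"], "reject"),
]

hired, rejected = Counter(), Counter()
for words, decision in history:
    (hired if decision == "hire" else rejected).update(words)

def score(resume_words):
    """Score a résumé: hired-word counts minus rejected-word counts."""
    return sum(hired[w] - rejected[w] for w in resume_words)

print(score(["rugby", "team"]))             # 0
print(score(["women's", "rugby", "team"]))  # -2: "women's" counts against you
```

No one told the scorer that gender matters; the word "women's" simply appears only in historically rejected résumés, so the model treats it as a negative signal.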
This problem threatens to erode trust in intelligent systems. Luckily, there are solutions. As of 2021, some major tech players are spearheading efforts to deal with algorithmic bias. IBM and Google encourage developers to use open-source tools to root out discrimination. Algorithmic bias appears solvable at the technical level, but other considerations remain.
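To make this concrete, here is a minimal sketch of one check such fairness tools commonly perform: the disparate-impact ratio, which compares selection rates between two groups. The data below is invented for illustration:

```python
# A minimal sketch of the "disparate impact" check: compare selection
# rates between two demographic groups. All data here is invented.
def selection_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A common rule of thumb flags ratios below 0.8 (the four-fifths rule)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# 1 = approved, 0 = denied, for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 37.5% approved

ratio = disparate_impact(group_a, group_b)
print(f"{ratio:.2f}")  # 0.50 -- well under the 0.8 threshold
print("flag for review" if ratio < 0.8 else "passes four-fifths rule")
```

Real toolkits compute many such metrics at once and across many subgroups, but the core idea is this simple: measure outcomes per group and flag large gaps.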
Even if pro-fairness tools remedy racial, religious, and gender disparities, other groups will allege unfairness. A patient refused an elective surgery may feel the algorithm rejected them for physical characteristics like weight, yet weight is a health indicator that doctors legitimately consider. Indeed, eliminating specific information may reduce "bias" while lowering accuracy. How developers should balance accurate outcomes with social responsibility remains an open question.
Even with powerful tools to reduce bias, will developers actually use them? Will they be a novelty or a necessity? There is no universal ethical code for AI practitioners. For a company looking for a quick, cheap, effective classification system, bias might seem irrelevant, and a fairness audit could sit far down the list of priorities.
The world welcomes and fosters systemic bias. Minorities already deal with systems that reject, interrogate, and incarcerate. For the powerful, biases within AI systems will be features, not bugs. If a government seeks to benefit dominant groups, why would it equalize an AI that does the work for it? Biased systems even offer plausible deniability: when confronted with evidence of bias, the perpetrators may play dumb. The real world is far from color-blind, and differences in health and wealth often stem from history. Context is a challenging thing to teach a robot.
Rooting out discrimination is possible, and the tools available will improve. That doesn’t mean bias will disappear. It will take work on the part of business leaders and governments worldwide to ensure that intelligent machines work for all people.
Bloomberg Says Apple and Microsoft Are Beefing Again
In recent years, we had reason to believe Apple and Microsoft were mending fences. But according to new analysis from Bloomberg, the tech giants may be back at it like Godzilla vs. Kong.
Claims of an improved relationship between the two companies were not unfounded: The Microsoft Office app became available on iPhones/iPads; Apple invited a Microsoft exec to a product launch; and Apple’s TV app showed up on the Xbox.
So where did things go south?
Bloomberg points to Apple’s Mac revamp announcement back in November, which featured “PC guy” John Hodgman. When Apple revived Hodgman’s character for a bit of fun, the Apple/Microsoft beef was arguably reignited. Intel followed suit, flipping Justin Long’s loyalty in their recent ad campaign.
These subtle digs play out against a wider dispute over Apple's App Store. Microsoft is one of many voices in tech frustrated with the Cupertino company's allegedly anticompetitive e-commerce practices.
Microsoft clashed with Apple directly about a year ago, when the companies couldn't agree on how to host Microsoft's all-in-one gaming platform, xCloud. Microsoft wanted xCloud to give subscribers a plethora of games through a single app subscription, while Apple would only allow games to be offered as individual downloads.
The ongoing Apple vs. Epic Games trial seems to have exacerbated the app store debacle. Microsoft has thrown support behind Epic, as a Redmond executive has already testified against Apple for thwarting Microsoft’s own gaming efforts with its app store policies.
The companies will also be competing in many of the same markets in the future. Apple has doubled down on AR, with a headset rumored to arrive next year. Since Microsoft's HoloLens has been something of a pioneer among AR headsets, a piece of dedicated AR hardware from Apple could add insult to injury.
Both Apple and Microsoft will also throw their hats into the ring in cloud computing and AI, two of the hottest markets in the future of tech.
While surface tensions between corporations tend to be largely token, the strife between these two companies runs deep and has a long history. Keep an eye out for the occasional media fireworks show.
EU to US and China: “Build the AI and We’ll Regulate It.”
Last week, the European Union presented draft legislation titled the Artificial Intelligence Act (AIA). The landmark bill is one of the first large-scale attempts to rein in intelligent technologies as big data computing reaches new heights.
The bill seeks to regulate facial recognition, which opens avenues for governments to surveil citizens well beyond what is humanly possible. It also contends with so-called “social credit systems.” These projects, mainly spearheaded by China, assign moral weight to political action. Social credits punish political dissent with higher interest rates for loans, for example. Even associating with dissidents can hurt one’s “credit score.”
The law would fine defiant companies six percent of their yearly revenue or $20 million, whichever is higher. Representatives of the Western tech industry derided the bill as punitive. Benjamin Mueller, a senior policy analyst at the pro-business Washington think tank Center for Data Innovation, called the law "a damaging blow to the Commission's goal of turning the EU into a global AI leader." The US's appetite for such regulation is low: Republicans favor lax laws, while Democrats receive immense funding from Silicon Valley.
Digital rights activists run the opposite flank, contending that the draft falls short. Biometric tracking, such as using fingerprints or DNA to trace people, remains unregulated in the bill's current form. Prominent organizations will look for any loophole to capitalize on user data.
China's official response has been limited, though President Xi Jinping has been vocal in his opposition to global governance by Western powers. "Any attempts to build walls and decouple are violating economic laws and market rules," the Chinese Communist Party head said. Xi was discussing unilateralism broadly, but the Eastern superpower's sentiment on the topic is steady.
China has been operating within Western restrictions for decades and will likely play by the rules as long as doing so expands its geopolitical influence.
Despite encompassing only around 5 percent of the world's population, the European Union has an outsized influence on global regulation and norms. The 2016 EU law known as the General Data Protection Regulation (GDPR) governs all data collected in the region. According to Forbes, it has also served as a de facto framework for businesses abroad seeking to operate in EU countries.
The AI law will likely undergo significant changes as it winds through the lawmaking process. Tech lobbyists, activists, and stakeholder countries will shape any future regulations, and industry will push for terms it can benefit from. The final version of the AIA is unlikely to pass for at least two years.
According to Human Rights Watch, 10 million Europeans live under authoritarian rule. The new law may keep technological forces at bay and encourage more open societies. Unfortunately, governments can subjugate without the help of automated systems.
The AI bill is a bold first step to regulating a powerful and complex technology. Critics note that by the time the text becomes law, it could be obsolete. But AI is here to stay, and many of the trends the EU is challenging will be relevant deep in the future.
Medical Triumphs Using AI Continue to Mount
When people interact with healthcare systems, it is a personal affair. Patients seek caring professionals to guide them through a vulnerable experience. At the systemic level, the world of medicine provides rich data for analytics. Artificial intelligence can give physicians insights that enhance care.
The first area where machine learning can support doctors is diagnostics. Reading questionnaires, charts, and other records to develop accurate diagnoses is often challenging, even for experienced physicians. In recent years, Google used machine vision to separate cancerous tumors from benign growths faster than professionals. While this system is imperfect, using it in conjunction with trained staff will hasten the process.
Researchers from the University of Wisconsin-Madison uncovered an early indicator of learning disabilities, autism, and other health risks using AI. By analyzing millions of records, they can pick out Fragile X Syndrome. The tool can diagnose learning disabilities up to five years earlier than other methods. By comparing DNA to health records, AI systems can discover relationships between genetic code and patient outcomes. Unaided, researchers would drown in the mountains of data produced by the medical system.
Even before the COVID-19 pandemic, many in the healthcare industry suffered from burnout. The leading cause of this burnout, according to Business Insider, is entanglement with hospital administrations. Cumbersome paperwork drains the energy from a shallow pool of clinicians. People don’t become doctors to deal with hundreds of forms and waivers. AI can accept bureaucracy’s robotic responsibilities.
Beyond easing clinicians' workloads, chatbot assistants can collect patient information. Not only does this option offer a conversational interface, but incomplete information can be caught and corrected before it reaches administrators. Chatbots are becoming more adept at conversing at a human level and can foster a more private environment for users.
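As a rough illustration of how that early catch might work, here is a hypothetical intake sketch (the field names and wording are invented, not any vendor's product) that re-prompts for missing required fields instead of forwarding a half-finished form:

```python
# A minimal sketch of the "catch incomplete info early" idea: a
# hypothetical intake bot re-prompts until required fields are filled,
# instead of passing a half-finished form to administrators.
REQUIRED = ["name", "date_of_birth", "symptoms", "insurance_id"]

def missing_fields(form: dict) -> list:
    """Return required fields that are absent or blank."""
    return [f for f in REQUIRED if not str(form.get(f, "")).strip()]

def intake(form: dict) -> str:
    gaps = missing_fields(form)
    if gaps:
        # In a real chatbot, this would become the next question asked.
        field = gaps[0].replace("_", " ")
        return f"Before we continue, could you share your {field}?"
    return "Thanks! Your information is complete and on its way."

print(intake({"name": "A. Patient", "symptoms": "cough"}))
# asks for the date of birth; nothing incomplete reaches the back office
```

The design choice is simple: validate at the point of conversation, where the patient can still answer, rather than downstream, where a clerk has to chase them.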
Human touch is irreplaceable, but many hospitals cannot afford that luxury, especially during a pandemic. In some cases, a rubber glove filled with warm water is the closest thing a patient receives to end-of-life solace. Robotics companies are developing soft robots built with artificial intelligence to provide comfort. While targeted at older people, the technology promises to reduce stress for many patients. Delicate robot arms can also perform some surgical procedures, though robotic surgeons have a ways to go before widespread use.
Drug discovery is another area where new computational techniques are improving a data-heavy field. In Australia, complications related to an antifungal agent perplexed chemists; only with AI analysis did a graduate student discover the complex process by which the drug turned deadly. The future of therapeutics runs directly through intelligent machines.
On the other side of the skin, wearable technology like smartwatches provides a detailed picture of health. Some wearables are fairly mundane, tracking things like heart rate. Others can reduce stress with electrical pulses, and one tool in the works allows blind people to "see" using a series of electric shocks. While many of these technologies do not rely on artificial intelligence themselves, the data they collect can be synthesized and mined for powerful patterns.
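As a toy illustration of that kind of pattern mining, here is a sketch that flags heart-rate readings sitting far from a rolling baseline. The readings are invented and the threshold is arbitrary; real wearable analytics are far more sophisticated:

```python
# A minimal sketch of mining wearable data for patterns: flag heart-rate
# readings that deviate sharply from a rolling baseline. Data invented.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=2.0):
    """Flag indices where a reading deviates from the mean of the
    preceding `window` readings by more than `threshold` std devs."""
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags

resting_hr = [62, 64, 63, 61, 65, 63, 62, 118, 64, 63]
print(flag_anomalies(resting_hr))  # [7] -- the 118 bpm spike
```

A flagged spike doesn't diagnose anything by itself, but aggregated across millions of users, signals like this are the raw material those "powerful patterns" are mined from.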
The hope is that new computational methods will improve the healthcare experience and bring more humanity to the field.