World Deserves Technology that Safeguards Human Rights

With the advancements in computer science, developers have created algorithms powerful enough to help us make decisions more efficiently, from loan approvals, hiring, and parole eligibility to patient care. What some of these inventors didn't expect was that many of those machine-made decisions would be influenced by human biases.

With a name like the AI Bill of Rights, you'd think robots and machines were being given the same ethical protections as humans. In reality, ExpressVPN research showed that the AI Bill of Rights is intended to safeguard the public from the harm that automated systems might cause through their algorithms, a phenomenon known as artificial intelligence prejudice, or AI bias.

While prejudice in AI is rarely intentional, it is unavoidable. And while there is no definitive answer as to what exactly causes AI bias, some acknowledged contributors include:

Creator bias: While algorithms and software are supposed to emulate people by identifying specific patterns, they can occasionally inherit their designers' unconscious preconceptions.

Data-driven bias: Some artificial intelligence is taught to acquire knowledge by observing patterns in data. AI, as a competent learner, will exhibit bias if a dataset has prejudice.

Bias through interaction: 'Tay,' Microsoft's Twitter-based chatbot, is a classic example. Designed to learn from user interactions, Tay lasted only 24 hours before being shut down after becoming increasingly racist and misogynistic. For example, as reported by the Indian Express, in some situations Tay referred to feminism as a "cult" or a "cancer."

Latent bias: An algorithm mistakenly associates concepts with gender or race stereotypes. For example, an AI might associate the term "doctor" with men simply because male figures dominate stock photos.

Selection bias: If the data used to train the algorithm over-represents one population, the algorithm will likely perform better for that population at the expense of other demographic groups (as seen with the Latino couple above).
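The data-driven and selection-bias mechanisms above can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration (the groups, rules, and numbers are invented for this sketch, not drawn from any real system): a single decision threshold is fit on training data dominated by group "A," and the model that results serves group "A" almost perfectly while quietly failing the under-represented group "B."

```python
# Minimal sketch of selection bias: fit one global threshold on data
# where group "A" is over-represented, then measure accuracy per group.
# All group names, rules, and numbers are illustrative assumptions.

def fit_threshold(samples):
    """Pick the threshold that minimizes overall training error."""
    candidates = sorted({x for x, _, _ in samples})
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum(1 for x, y, _ in samples if (x > t) != (y == 1))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def group_accuracy(samples, t, group):
    """Accuracy of the rule 'predict 1 if x > t' on one group only."""
    g = [(x, y) for x, y, grp in samples if grp == group]
    correct = sum(1 for x, y in g if (x > t) == (y == 1))
    return correct / len(g)

# Group "A" (90% of samples): the true rule is label = 1 when x > 5.
# Group "B" (10% of samples): the true rule is label = 1 when x > 2.
data = [(x, int(x > 5), "A") for x in range(11)] * 9
data += [(x, int(x > 2), "B") for x in range(11)]

t = fit_threshold(data)
print("learned threshold:", t)                          # → 5 (group A's rule)
print("accuracy on A:", round(group_accuracy(data, t, "A"), 2))  # → 1.0
print("accuracy on B:", round(group_accuracy(data, t, "B"), 2))  # → 0.73
```

Because group "A" dominates the training set, minimizing overall error recovers "A"'s decision rule exactly, and every sample from group "B" with x between 3 and 5 is misclassified. Nothing in the overall error metric flags the problem, which is why per-group evaluation is the standard diagnostic.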

Over the last few years, it has become evident that the systems designed to automate human decision-making are also contributing to broad ethical concerns. Unsurprisingly, this has resulted in calls for the United States government to create an algorithmic bill of rights that safeguards the American people's civil rights and liberties, a call it has ultimately heeded.

Unfortunately, there is no simple answer. Unlike the better-known Bill of Rights, which is made up of the first 10 amendments to the United States Constitution, the AI version has yet to become binding legislation (hence the word "blueprint"). This is because the OSTP is a White House body that advises the president but cannot pass legislation.

This means that adhering to the recommendations outlined in the nonbinding white paper is entirely voluntary. As a result, the AI Bill of Rights should be viewed more as a teaching tool, outlining how government agencies and technology corporations should keep their AI systems safe so that their algorithms do not become biased in the future.

An AI system is only as good as its input data. AI bias can technically be eliminated if developers follow the Blueprint's suggestions and intentionally create AI systems with responsible AI principles in mind. While the Blueprint is a step in the right direction, experts warn that until the AI Bill of Rights is made law, there will be too many gaps that allow AI prejudice to go unnoticed.

"While this Blueprint does not provide us with everything we have been calling for, it is a blueprint that should be exploited for better consent and equity. Next, we need lawmakers to create government policy that enacts this strategy," says the Algorithmic Justice League, a group dedicated to fighting AI-based prejudice.

At this point, no one knows when or how long it will take for this to happen. Meredith Broussard, an NYU data journalism professor and author of Artificial Unintelligence, predicts a "carrot-and-stick situation." 

"There will be a call for voluntary cooperation," she continues. "And then we'll see that it doesn't work, and there will be a demand for enforcement." We're hoping she's onto something. Humanity deserves technology that safeguards our fundamental rights.
