How do we build trust in machine learning models?
2021
Introduction
Artificial intelligence (AI) systems and machine learning algorithms are rapidly being used in both the
private and public sectors to simplify basic and complex decision-making processes. Most economic
sectors, including transportation, retail, advertising, and electricity, are being disrupted by data
digitization on a large scale, as well as the emerging technologies that use it. Computerized systems
are being implemented to increase precision and drive objectivity in government operations, and AI is
having an effect on democracy and governance.
Computers have made it simple to extract new insights thanks to the availability of large data sets. As
a result, algorithms have evolved into more complex and ubiquitous methods for automated decision-making. An algorithm is a series of step-by-step instructions that a computer follows to complete a task.
Hiring, advertisement, criminal punishment, and lending decisions were all made by humans and
organizations in the pre-algorithm era. These decisions were often regulated by federal, state, and
local laws that set standards for justice, openness, and equality in decision-making (Lee, Resnick, &
Barton, 2019). Today, some of these decisions are made entirely by, or heavily influenced by, computers, whose scale and statistical rigor promise unprecedented efficiencies. Algorithms are using large
amounts of macro- and micro-data to influence decisions affecting people in a variety of activities,
ranging from movie recommendations to assisting banks in determining a person's creditworthiness.
Supervised machine learning algorithms depend on labeled data sets, or training data, that specify
the correct outputs for specific people or artifacts. The algorithm then learns a model that can be
applied to other people or artifacts, predicting what the correct outputs should be for them based on
the training data (Lee et al., 2019).
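The supervised-learning loop described above can be sketched in a few lines. The example below uses a 1-nearest-neighbour rule: the "model" is simply the labeled training data, and a new case receives the label of its closest training example. The feature values and labels are hypothetical toy data invented for illustration, not real credit records.

```python
# Minimal sketch of supervised learning: labeled training examples
# define the correct outputs, and predictions for new cases are made
# by finding the most similar training example (1-nearest-neighbour).

def predict(training_data, features):
    """Return the label of the training example closest to `features`."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda ex: sq_distance(ex[0], features))
    return nearest[1]

# Hypothetical training data:
# (income in $1000s, existing debt in $1000s) -> creditworthiness label
training_data = [
    ((80, 5), "creditworthy"),
    ((75, 10), "creditworthy"),
    ((30, 40), "not creditworthy"),
    ((25, 35), "not creditworthy"),
]

print(predict(training_data, (70, 8)))   # → creditworthy
print(predict(training_data, (28, 38)))  # → not creditworthy
```

Real systems replace the nearest-neighbour rule with models (logistic regression, gradient-boosted trees, neural networks) that generalize beyond memorized examples, but the structure is the same: learn from labeled data, then predict outputs for unseen cases.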