Written by Michael Feder
Reviewed by Kathryn Uhles, MIS, MSP, Dean, College of Business and IT
The use of artificial intelligence (AI) is on the rise. Projections by Grand View Research point to significant market growth between 2023 and 2030.
In practical terms, AI can take many different forms. Major corporations have already put AI to work in their operations. Meanwhile, tech firms are racing to develop technology such as artificial neural networks, which try to mimic the human brain.
Despite the growing use of AI, many have raised concerns about ethics and the governance the technology requires. After all, this is a relatively new field with far-reaching ramifications that are both helpful and concerning.
To get a better understanding of AI and its ethical debate, we sat down with Joseph Aranyosi, an associate dean of the College of Business and Information Technology at University of Phoenix.
Research has found that healthcare, marketing and market research, retail and manufacturing are among the industries currently using responsible AI to automate and streamline processes and cut costs.
In particular, many organizations have been using AI to analyze customer data, according to a 2022 report by McKinsey & Company. But consumer behavior and customer data analysis (especially in marketing) have been around for about 100 years now, so it's not really a new field or activity, Aranyosi says. However, the use of AI to auto-analyze customer data for trends, preferences and predictive modeling is relatively new. AI-driven data analysis has been applied across various industries.
When personal data is collected, security and guidelines for ethical use are of paramount importance. Every organization that collects customer or patient data has an ethical obligation to protect it and share it only on an as-needed basis. Most major businesses have data governance committees and policies that provide these guidelines and monitor the use of AI in data analysis, Aranyosi says.
According to PricewaterhouseCoopers, algorithm-powered software can assess diagnostic images and test results to find illnesses with 99% accuracy. Health information technology uses programs to analyze large amounts of patient data for insights on treatment outcomes and disease risks. However, only qualified healthcare professionals should make diagnoses. AI is just a tool they use in the evaluation process to recommend additional testing and verification.
Despite the potential benefits of responsible AI use, many patients are concerned about the potential threat to their health data security and privacy. The Pew Research Center found that, in principle, many Americans are uncomfortable with the use of AI in their healthcare. Ethical concerns center on data privacy and security.
Beyond automating institutional-level trading, AI software can also handle time-consuming processes like depositing checks, onboarding new customers and processing loan applications.
Artificial intelligence can also crunch vast amounts of data and provide financial insights that can guide risk management and decision-making. Meanwhile, AI-powered financial advisors can assess investors' needs and goals and offer advice for a fraction of the cost of a human advisor.
Joseph Aranyosi
Associate Dean, College of Business and Information Technology
Similar to other industries, AI-based finance innovations have raised serious ethical concerns about data privacy. In 2022, the financial sector experienced more data breaches than any other industry except healthcare. The use of artificial intelligence could exacerbate that vulnerability. Also, bias in AI-based loan processing can make it more difficult for some disenfranchised groups to obtain credit.
Aranyosi also observes: "AI is never 100% objective since all programming is done by humans. So, if an AI is tasked to analyze data using biased standards, then it's going to give biased results. In IT, you often hear the phrase 'Garbage in, garbage out,' and this definitely applies to AI. The problem is that most of these systems and processes are proprietary and not open to objective review, so it's difficult to find and prove that such AI analyses are biased without additional scrutiny or regulatory oversight."
AI has transformed manufacturing in several ways. The use of robotics to automate the production process was one of the first examples of artificial intelligence. Today, embedded systems can monitor automated systems, collecting data that can improve efficiency, increase quality and predict maintenance needs for equipment.
Artificial intelligence also analyzes inventory, supply chain and demand data to help manufacturers evaluate if they have enough materials to meet customer needs.
Because manufacturing-related systems require embedded software, smart machinery and complex computer systems, implementation can be expensive. Also, AI-powered equipment can make employees redundant at certain companies. Automation advances spurred by the COVID-19 pandemic could lead to significant job displacement by 2025.
According to the World Economic Forum (WEF), AI is expected to replace millions of jobs by 2025. However, the report goes on to say that it will also create 97 million new jobs in that same time frame.
The WEF also suggests AI has brought benefits to retail and will bring more. Software for customer relationship management and marketing relies on AI to predict demand and analyze vast amounts of customer data to inform and assess marketing strategies. Because these insights come quickly with the rapid processing power of AI, marketers can get real-time insights as a campaign is unfolding.
AI can also detect fraudulent charges, monitor sensors and security systems to prevent theft and optimize operations in a way that lowers overhead for both in-person and online businesses. These lower costs could theoretically be passed along to consumers.
However, these conveniences have the potential to render human employment redundant and cause job losses in the retail sector.
Law enforcement agencies have always collected data on crimes and criminals to decide where to focus their efforts. AI-powered tools now allow for enhanced analysis of vast amounts of information.
Software can also analyze forensic evidence and provide proof from specific data, such as the sound of a gun or the layout of a crime scene. Tools like facial recognition can also help police find criminals and identify suspects.
AI in policing and criminal justice raises questions of bias and privacy issues. There are also concerns that certain types of monitoring violate privacy rights because people are recorded or have their identity checked even though they are not under suspicion.
AI benefits the insurance industry through the automation of repetitive tasks, risk assessment and analysis, and fraud detection. AI can handle manual data collection and basic assessment tasks, support decision-making and improve risk assessment for insurance underwriters reviewing policy applications and calculating premiums.
Fraud detection is another area where AI benefits insurance companies. Algorithm-powered software can collect claims data and look for patterns of potential fraud. These systems can detect complex patterns that would escape the notice of human fraud prevention specialists.
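One simple version of this pattern-spotting is statistical outlier detection. The sketch below, with hypothetical claim amounts and a plain z-score rule, stands in for the far more sophisticated models insurers actually deploy; `flag_anomalous_claims` is an illustrative name, not a real product API.

```python
import statistics

def flag_anomalous_claims(amounts, threshold=3.0):
    """Flag claim amounts that sit far outside the typical range.

    A claim is flagged when its z-score (distance from the mean,
    measured in standard deviations) exceeds the threshold.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Hypothetical claim amounts: mostly routine, one extreme outlier
claims = [410, 385, 402, 399, 405, 395, 420, 9800]
print(flag_anomalous_claims(claims, threshold=2.0))
```

Flagged claims would then go to a human fraud prevention specialist for review rather than being rejected automatically, consistent with the human-oversight theme that runs through this article.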
Of course, the other side of this coin is just how much data is collected, especially without people's knowledge or consent.
Most of the press related to AI in transportation focuses on autonomous vehicles. Smart safety features have become common in cars, and partial autonomous systems improve safety on ships, trains and airplanes.
Algorithm-powered software can also help with airline scheduling and traffic congestion. It can be leveraged to find optimal routes to reduce transit time and limit carbon emissions as well.
Transportation-related AI systems can be expensive, however. For personal cars, this often means safety features are available only on newer vehicles, which consumers would have to purchase. Also, newer tech still requires human oversight and, in case of malfunction, intervention.
Responsible AI is transforming business organizations and services in obvious ways, but it has also found its way into people's everyday lives.
Aranyosi explains: "Most of us are familiar with the use of AI in the Internet of Things [IoT], such as the Siri, Alexa, or Google Assistant apps. Many home appliances are now internet-connected devices that can automate processes, share maintenance recommendations, compile grocery lists, help navigate us through traffic, etc. Apple Watch is a great example of wearable AI technology. All of these are designed to make our lives easier and keep us informed, so although there's still the possibility for the ethical misuse of personal data, in most cases AI can be used responsibly to simplify and improve our lives."
Examples of artificial intelligence can be found in your home, your vehicle and your pocket.
Although these tools might not be as sophisticated as those employed by corporations, they still offer real benefits and give rise to many of the same concerns.
Whether in an organization or at an individual, personal level, responsible AI has benefits.
Responsible AI can also offer insights that help companies reduce costs, limit environmental impact and bring other positive attributes to their business operations.
"Good examples of responsible AI are health- and sleep-monitoring apps, which can provide us with helpful reminders, updates and recommendations for improving our exercise routines or eating patterns," Aranyosi says.
Though ethical concerns vary according to industry and organization, there are broader issues that affect all uses of AI.
"AI is a rapidly growing area of research. Like most new technology, it's not going to be perfect right out of the gate," says Aranyosi. "Researchers, businesses, educational institutions and other organizations continue to look for ways to build better diversity, equity, inclusion and belongingness into AI tools and are actively seeking feedback from a wide variety of constituencies that can help to improve these tools through responsible use, security, safety and representation. It's important that we continue to have healthy dialogues about the benefits and concerns of AI to ensure that we're appropriately meeting everyone's needs."
Research points to instances of bias in one of the world's most commonly used artificial intelligence systems: internet search engines. Type in an image prompt for "school girls," for example, and you're likely to get options very different in nature than if you prompted "school boys."
Bias can also affect the decisions AI systems make. Despite using complex algorithms and seeming to act independently, the software was coded by human programmers who created its algorithms. They may have added factors that favor certain types of data. This favoritism may be unintentional, but it can have a real impact when it comes to law enforcement, loan applications or insurance underwriting.
"AI development needs to include input from a wide, diverse pool of researchers, developers and users. This input needs to include representation from historically marginalized groups to ensure that we're not unintentionally skewing the data sets and algorithms being used," Aranyosi says.
He continues: "Objectivity, accuracy, quality assurance, feedback and other mechanisms need to be built into the development process to ensure that such unintended consequences can be addressed quickly and to everyone's satisfaction. Although bias is an inherent part of being human, we can better address bias collectively by ensuring that everyone has a voice in the conversation and that these concerns are taken seriously to make meaningful improvements."
AI products often face a transparency dilemma. On the one hand, AI software developers may be tempted to release information about their products to prove they do not have bias or privacy problems. On the other, doing so could expose security vulnerabilities to hackers, invite lawsuits and allow competitors to gain insights into their processes.
Governments and organizations use AI to assess video feeds, images and data to conduct surveillance on private citizens. These practices raise privacy questions and cause concern about the power that personal data and images give to governments or companies using AI-enhanced systems.
With the right algorithms, a website or app can display content meant to influence users' opinions and behavior. In the wrong hands, this type of system has the potential to control people's opinions and spur them to take action in support of an unscrupulous leader using deepfakes and other AI-manipulated content.
AI allows businesses to streamline operations and automate processes. A report by Goldman Sachs found that a significant share of jobs could be exposed to AI automation over the next 10 years but that most of those jobs would be complemented by AI rather than replaced.
The technological singularity is a hypothetical point in the future when tech and AI are such a part of life that they irreversibly change humanity and the world. The theory suggests that AI could take on human qualities, such as curiosity and desire, and that computerized brain implants and other advances would allow humans to develop superintelligence.
Critics write off the singularity as a fantasy, but others are concerned that specific aspects of technology could irreversibly alter the world.
To prevent these ethical dilemmas from becoming reality, guidelines and frameworks have been proposed that aim to maintain ethical AI use in the workplace, at home and in wider society.
Informed consent is the process by which medical providers give patients information about a treatment, procedure, medication or medical trial. Once they understand the potential benefits and risks, patients can decide whether to proceed.
Informed consent can extend to AI-powered systems, with physicians telling patients about the data collected, accuracy and other factors arising from AI. Patients can weigh the decision, and providers will need to respect their wishes.
Software makers can assess the fairness of their artificial intelligence algorithms before deploying them. The first step is to define fairness and then assess the data fed into the algorithm, the algorithm itself and the results to ensure each step fits the definition.
Developers can also test their algorithms to ensure certain variables, such as age, gender or race, do not alter the results.
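One common way to run such a test is an "attribute flip": score two records that are identical except for a protected attribute and check that the outputs match. This is a minimal sketch; `credit_score` is a stand-in for a real model, and the applicant data is hypothetical.

```python
# Hypothetical scoring function standing in for a deployed model.
# A fair model should ignore the protected attribute entirely.
def credit_score(income, debt, age):
    return income * 0.5 - debt * 0.3

def attribute_flip_test(model, base, protected_key, alt_value):
    """Check that changing only a protected attribute leaves the score unchanged."""
    flipped = dict(base, **{protected_key: alt_value})
    return model(**base) == model(**flipped)

applicant = {"income": 52000, "debt": 8000, "age": 34}
print(attribute_flip_test(credit_score, applicant, "age", 67))
```

Real fairness audits go further, for example by comparing approval rates across whole groups, but a flip test like this catches the most direct form of variable-driven bias.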
AI yields impressive results because it uses huge amounts of data. Much of this is personal data that people might want to protect and not have exposed. Hackers target personal data more than any other data type.
In addition to enhancing database security, companies and organizations can anonymize data by removing identifying details or decoupling data from users before feeding it into an AI system.
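A minimal sketch of that decoupling step: drop direct identifiers and replace the user ID with a one-way hash before the record reaches any analysis pipeline. The field names and record are hypothetical, and real anonymization must also guard against re-identification from the remaining fields.

```python
import hashlib

def anonymize(record, drop_fields=("name", "email"), id_field="user_id"):
    """Remove direct identifiers and replace the user ID with a one-way hash."""
    cleaned = {k: v for k, v in record.items() if k not in drop_fields}
    # SHA-256 is one-way: the original ID cannot be recovered from the hash
    cleaned[id_field] = hashlib.sha256(str(record[id_field]).encode()).hexdigest()[:12]
    return cleaned

record = {"user_id": 1042, "name": "Jane Doe", "email": "jane@example.com", "purchases": 7}
print(anonymize(record))
```

The analysis system still sees a stable pseudonymous ID (so trends per customer remain visible) without ever handling the person's name or contact details.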
AI safety involves defining potential risks and taking steps to avoid catastrophic incidents due to AI failures. The process includes monitoring performance and operations to look for malicious activity and estimating the algorithm's level of certainty in each particular decision.
Transparency can also help with safety because it would give third parties a chance to assess the algorithms and find vulnerabilities.听
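The certainty estimate mentioned above is often computed by converting a model's raw scores into probabilities with a softmax and reporting the top probability. This is a generic illustration with hypothetical scores, not any particular vendor's method; low confidence would typically route the decision to a human.

```python
import math

def decision_confidence(logits):
    """Convert raw model scores into probabilities and report the top confidence."""
    # Subtracting the max before exponentiating keeps the math numerically stable
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return max(probs)

# Hypothetical raw scores for three possible decisions
confidence = decision_confidence([2.0, 0.5, 0.1])
print(f"top-choice confidence: {confidence:.2f}")
```

A safety monitor could flag any decision whose confidence falls below a set threshold for human review.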
In the not-so-distant future, the integration of artificial intelligence is set to revolutionize diverse professional landscapes. From streamlining complex decision-making processes to enhancing productivity, AI is poised to become an indispensable and responsible ally in the workplace. As this technology advances, professionals in various fields can anticipate a shift in their roles, with routine tasks increasingly automated, freeing up valuable time for higher-order thinking and strategic endeavors.
The advent of responsible AI is likely to reshape how professionals approach problem-solving and innovation. Machine learning algorithms will empower individuals to leverage vast amounts of data, extracting actionable insights and fostering more informed and responsible decision-making. In this evolving landscape, there emerges a shared responsibility among both human professionals and AI systems to uphold ethical standards and ensure the responsible use of technology.
Collaboration between human expertise and the responsible capabilities of AI is expected to unlock new frontiers, pushing the boundaries of what is currently achievable within the professional realm. Moreover, the personalization of responsible AI tools is set to create tailored experiences for individuals, optimizing workflow and boosting overall efficiency. Professionals may find themselves working alongside intelligent systems that adapt to their preferences and learning styles, creating a symbiotic and responsible relationship that amplifies human potential.
While some may express concerns about job displacement, the prevailing narrative suggests a transformation rather than outright replacement. Responsible AI is poised to augment human abilities, allowing professionals to focus on tasks that require emotional intelligence, creativity, and nuanced problem-solving 鈥 areas where machines currently struggle to excel.
As the future unfolds, the integration of responsible AI into various careers holds the promise of unlocking unprecedented opportunities for growth, innovation and professional fulfillment while emphasizing the ethical and responsible use of these powerful technologies.
Aranyosi notes that University of Phoenix's data science degrees "cover AI, machine learning, deep learning, and data analysis and use. Many of our IT degrees include content on programming, software development, database administration, networking, data standards, security, ethics and other related topics."
If a career in data science interests you, learn more about the Bachelor of Science in Data Science and Master of Science in Data Science at University of Phoenix. There are also other online programs in information technology to consider if you're seeking foundational IT knowledge.
A graduate of Johns Hopkins University and its Writing Seminars program and winner of the Stephen A. Dixon Literary Prize, Michael Feder brings an eye for detail and a passion for research to every article he writes. His academic and professional background includes experience in marketing, content development, script writing and SEO. Today, he works as a multimedia specialist at University of Phoenix, where he covers a variety of topics ranging from healthcare to IT.
Currently Dean of the College of Business and Information Technology, Kathryn Uhles has served University of Phoenix in a variety of roles since 2006. Prior to joining University of Phoenix, Kathryn taught fifth grade to underprivileged youth in Phoenix.
This article has been vetted by University of Phoenix's editorial advisory committee.
Read more about our editorial process.