Many individuals have a limited understanding of artificial intelligence (AI). For instance, in a survey conducted in 2017, only 17 percent of 1,500 senior business leaders in the United States claimed to be familiar with AI. Some of them were uncertain about its nature and its potential impact on their respective companies. While they recognized the significant potential for transforming business processes, they lacked clarity on how to incorporate AI within their organizations.
Despite this widespread lack of familiarity, AI is a technology that is revolutionizing every aspect of our lives. It is a versatile tool that allows us to reconsider how we process information, analyze data, and leverage predictive capabilities to enhance decision-making. The purpose of this comprehensive overview is to explain AI to policymakers, opinion leaders, and interested individuals, and to showcase how AI is already changing the world, while raising important societal, economic, and governance questions.
This paper delves into new applications of AI in finance, public security, healthcare, criminal justice, transportation, and smart cities. It also addresses critical issues such as data accessibility, algorithmic bias, AI ethics and transparency, and the legal implications of AI-driven decisions. Additionally, it highlights the contrasting approaches taken by the United States and the European Union in regulating AI. The paper concludes by offering recommendations to maximize the benefits of AI while safeguarding core human values.
To maximize the benefits of AI, we propose the following nine steps:
- Encourage greater data access for researchers while upholding users’ personal privacy.
- Increase government funding for unclassified AI research to drive innovation.
- Promote new models of digital education and prioritize AI workforce development to equip employees with the necessary skills for the 21st-century economy.
- Establish a federal AI advisory committee to provide policy recommendations and guidance.
- Engage with state and local officials to ensure the implementation of effective AI policies at all levels.
- Focus regulatory efforts on establishing broad AI principles rather than specific algorithms.
- Address bias complaints seriously to prevent the replication of historical injustice, unfairness, or discrimination in AI systems’ data or algorithms.
- Maintain mechanisms for human oversight and control to ensure ethical and responsible use of AI technologies.
- Enforce penalties for malicious AI behavior and prioritize cybersecurity measures to safeguard against potential threats.
1. The distinctive qualities of artificial intelligence
Although a universally accepted definition of artificial intelligence (AI) is lacking, it is generally understood to encompass machines that exhibit responses comparable to those of humans, considering human capacities for contemplation, judgment, and intentionality. According to researchers Shubhendu and Vijay, AI software systems are capable of making decisions that typically necessitate a high level of human expertise, enabling individuals to anticipate problems and address emerging issues. Consequently, AI operates intentionally, intelligently, and adaptively, showcasing its remarkable capabilities.
Intentionality
Artificial intelligence (AI) algorithms possess the capacity to make decisions, frequently leveraging real-time data. They stand apart from passive machines that can only provide mechanical or predetermined responses. By utilizing sensors, digital data, or remote inputs, these algorithms consolidate information from diverse sources, instantaneously analyze the collected data, and act upon the insights derived from it. The tremendous advancements in storage systems, processing speeds, and analytic techniques have endowed AI algorithms with a remarkable level of sophistication in conducting analysis and making decisions.
Intelligence
AI is typically coupled with machine learning and data analytics to drive its functionality. Machine learning algorithms analyze data to identify underlying patterns and trends. When relevant insights are discovered, software designers can utilize this knowledge to analyze specific problems effectively. The key requirement is the availability of robust data that enables algorithms to discern meaningful patterns. Data can take various forms, including digital information, satellite imagery, visual data, text, or unstructured data. By harnessing the power of AI, machine learning, and data analytics, organizations can derive valuable insights and make informed decisions based on diverse data sources.
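The pattern-finding step described above can be made concrete with a minimal sketch. The example below fits a linear trend to a small series by ordinary least squares, one of the simplest ways software can "discern a meaningful pattern" in data; the monthly figures are invented for illustration.

```python
# Illustrative sketch: discerning a pattern (here, a linear trend) in raw
# data via ordinary least squares. The monthly figures are hypothetical.

def fit_trend(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

months = [1, 2, 3, 4, 5, 6]
activity = [10, 12, 15, 15, 18, 20]   # hypothetical counts
slope, intercept = fit_trend(months, activity)
print(f"trend: {slope:.2f} units/month")
```

Real systems apply far richer models to far larger data sets, but the principle is the same: once a pattern is extracted, software designers can use it to analyze a specific problem.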
Adaptability
AI systems possess the remarkable ability to learn and adapt while making decisions. In the realm of transportation, semi-autonomous vehicles are equipped with tools that inform drivers and vehicles about potential traffic obstacles such as congestion, potholes, or highway construction. Through the utilization of shared experiences from other vehicles on the road, these vehicles can benefit from collective knowledge without requiring human intervention. The wealth of accumulated “experience” can be instantaneously and comprehensively transferred to other vehicles with similar configurations. Leveraging advanced algorithms, sensors, and cameras, these systems incorporate real-time experiences into their operations. They present information through dashboards and visual displays, enabling human drivers to comprehend ongoing traffic and vehicle conditions. On the other hand, fully autonomous vehicles employ sophisticated systems that assume complete control over the vehicle and make all navigational decisions autonomously. Through the fusion of cutting-edge technologies, AI empowers the transportation industry to enhance safety, efficiency, and the overall driving experience.
2. Applications of AI across diverse sectors
Artificial Intelligence (AI) is not a distant vision but a present reality, actively integrated and deployed across various sectors such as finance, national security, healthcare, criminal justice, transportation, and smart cities. It has already begun making a significant impact on the world, augmenting human capabilities in remarkable ways. The growing role of AI is fueled by its potential for economic development, opening up tremendous opportunities for global growth.
A groundbreaking study conducted by PricewaterhouseCoopers (PwC) highlights the economic potential of AI, estimating that by 2030, AI technologies could increase global GDP by a staggering $15.7 trillion, equivalent to 14 percent of global output. These advancements include projected gains of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion in Africa and Oceania, $0.9 trillion in the rest of Asia (excluding China), $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China, in particular, is rapidly advancing in the AI landscape, aiming to invest $150 billion in AI and become a global leader in the field by 2030.
The McKinsey Global Institute further reveals that AI-led automation in China could inject a substantial productivity boost, adding 0.8 to 1.4 percentage points to annual GDP growth, depending on the speed of adoption. Although China currently trails the United States and the United Kingdom in AI deployment, its immense market size provides abundant opportunities for pilot testing and future advancements. With significant investments and a strategic focus on AI development, China is poised to shape the future of AI-driven innovation and its economic implications.
AI’s transformative impact is not a distant prospect but a present reality, with promising economic prospects and the power to reshape industries globally. The integration of AI into diverse sectors is already unlocking new possibilities, paving the way for a future where technology and human ingenuity converge to propel societies forward and, in the process, reshape our world.
National Security
AI plays a significant role in national defense, as demonstrated by initiatives like the American military’s Project Maven. This project aims to leverage AI technology to analyze vast amounts of surveillance data and alert human analysts to patterns, abnormal activities, or suspicious behavior. Deputy Secretary of Defense Patrick Shanahan emphasizes that emerging technologies in this field aim to meet the needs of warfighters while increasing the speed and agility of technology development and procurement.
The integration of big data analytics with AI is poised to revolutionize intelligence analysis, enabling near-real-time sifting through massive data sets. This empowers commanders and their staff with unprecedented levels of intelligence analysis and productivity. Additionally, AI’s impact extends to command and control structures, as routine decisions and, in certain circumstances, crucial decisions can be delegated to AI platforms. This delegation dramatically reduces decision-making time and facilitates swift action. The time-competitive nature of warfare underscores the importance of AI-assisted decision support and command systems, which can operate at speeds far surpassing traditional methods. This rapidity has even given rise to a new term, “hyperwar,” to encapsulate the speed at which future conflicts may be waged.
While ethical and legal debates persist regarding the use of AI-driven autonomous lethal systems in warfare, it is crucial to acknowledge that countries like China and Russia are advancing in this field without the same level of scrutiny. Anticipating the need to defend against hyperwar-capable systems, Western nations must grapple with the challenge of determining the appropriate level of human involvement in such scenarios. Additionally, the proliferation of zero-day cyber threats and polymorphic malware poses a significant challenge to traditional signature-based cyber protection methods. To enhance cyber defenses, a layered approach with cloud-based cognitive AI platforms is imperative. This approach fosters a “thinking” defensive capability that continuously trains on known threats, allowing for DNA-level analysis of previously unknown code and the potential to identify and halt inbound malicious code. This was exemplified when certain critical U.S.-based systems effectively neutralized the damaging “WannaCry” and “Petya” viruses.
Preparations for hyperwar and the defense of critical cyber networks must become high priorities, given the substantial investments made by China, Russia, North Korea, and other nations in AI research and development. China, for instance, has outlined plans to build a domestic AI industry worth nearly $150 billion by 2030, with companies like Baidu pioneering advanced applications such as facial recognition for locating missing persons. Moreover, cities like Shenzhen are providing significant support, including up to $1 million, for AI labs. China envisions AI playing a role in security, counterterrorism efforts, and improved speech recognition programs. The dual-use nature of many AI algorithms necessitates a comprehensive approach to research, ensuring that advancements in one sector can be swiftly adapted for security purposes.
To maintain competitiveness in this evolving landscape, prioritizing preparedness for hyperwar scenarios and strengthening critical cyber networks is essential. The multifaceted nature of AI’s impact underscores the need for proactive measures and investments in research, development, and defense capabilities across sectors.
Finance
The United States witnessed a significant surge in investments in financial AI, reaching $12.2 billion by 2014, triple the amount recorded in 2013. In the financial sector, traditional decision-making processes for loans have been replaced by software that incorporates a wide range of finely parsed borrower data, moving beyond credit scores and background checks. Additionally, the rise of robo-advisers has revolutionized investment management by creating personalized investment portfolios, eliminating the need for human stockbrokers and financial advisers. These advancements aim to remove emotional biases from investment decisions, relying instead on analytical considerations and executing choices within minutes.
Stock exchanges provide a prominent example of how AI has transformed decision-making processes. High-frequency trading, conducted by machines, has largely replaced human intervention. Investors submit buy and sell orders, and computers match them instantly, capitalizing on trading inefficiencies and small-scale market differentials as instructed. Looking further ahead, quantum computing promises to extend these capabilities: “quantum bits” can store multiple values simultaneously, offering significantly greater capacity and shorter processing times than today’s classical hardware. AI also plays a crucial role in fraud detection within financial systems. Identifying fraudulent activities can be challenging in large organizations, but AI algorithms excel at identifying abnormalities, outliers, and deviant cases that warrant further investigation. This proactive approach enables managers to identify and address issues early, before they escalate to dangerous levels.
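The outlier-flagging idea behind fraud detection can be sketched very simply: transactions far from the statistical norm are surfaced for human review. The amounts below are hypothetical, and production systems use far more robust statistics (a single large outlier inflates the standard deviation, so methods based on the median absolute deviation are common).

```python
# Minimal sketch of statistical outlier detection for fraud review.
# Transaction amounts are hypothetical; the 2-sigma threshold is a
# toy choice, not a recommendation.
import statistics

def flag_outliers(amounts, threshold=2.0):
    """Return amounts more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    return [a for a in amounts if abs(a - mean) > threshold * stdev]

transactions = [120, 95, 130, 110, 105, 9800, 115, 100]  # one suspicious entry
suspicious = flag_outliers(transactions)
print(suspicious)
```

In practice, the flagged cases are exactly the "abnormalities, outliers, and deviant cases" the text describes, queued for an investigator rather than acted on automatically.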
The integration of AI technologies in the financial sector demonstrates its immense potential in streamlining operations, enhancing decision-making processes, and mitigating risks such as fraud. These advancements signify a shift towards data-driven approaches that leverage the power of AI to extract valuable insights and optimize financial outcomes.
Criminal Justice System
AI is increasingly utilized in the criminal justice domain, with notable applications seen in Chicago’s implementation of an AI-powered “Strategic Subject List.” This system assesses individuals who have been arrested to determine their likelihood of becoming future offenders, ranking approximately 400,000 individuals on a scale of 0 to 500. The evaluation incorporates various factors such as age, criminal history, victimization, drug-related arrests, and affiliation with gangs. Analyzing the data revealed insightful findings, including the strong predictive value of youth as an indicator of violence, the association between being a shooting victim and the potential to become a future perpetrator, the limited predictive significance of gang affiliation, and the lack of significant correlation between drug arrests and future criminal activity. Through the application of AI, these insights contribute to more informed decision-making processes within the criminal justice system.
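A weighted-factor score clamped to a 0–500 scale, of the general kind the paragraph describes, might look like the sketch below. The factors echo those named in the text (youth, criminal history, victimization), but every weight here is invented for illustration; the actual Strategic Subject List model is not public.

```python
# Hypothetical 0-500 risk-scoring sketch. All weights are invented for
# illustration and do NOT reflect the actual Strategic Subject List.

def risk_score(age, prior_arrests, shooting_victim):
    score = 0
    score += max(0, 30 - age) * 8           # youth weighted heavily
    score += min(prior_arrests, 10) * 15    # criminal history, capped
    score += 120 if shooting_victim else 0  # victimization as a predictor
    return max(0, min(500, score))          # clamp to the 0-500 scale

print(risk_score(age=19, prior_arrests=3, shooting_victim=True))
```

Even this toy makes the policy questions concrete: every weight is a value judgment, which is precisely why critics demand transparency about how such scores are constructed.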
According to judicial experts, the integration of AI programs in law enforcement has the potential to mitigate human bias and foster a more equitable sentencing system. Caleb Watney, an associate at the R Street Institute, highlights that the objective nature of predictive risk analysis aligns well with the capabilities of machine learning, automated reasoning, and other AI methodologies. In fact, a policy simulation utilizing machine learning indicated that such programs could potentially result in a crime reduction of up to 24.8 percent without altering incarceration rates, or alternatively, decrease jail populations by up to 42 percent without leading to an increase in crime rates. These empirical findings underscore the transformative potential of AI in promoting fairness and efficiency within the realm of criminal justice.
Nonetheless, there are concerns raised by critics regarding the utilization of AI algorithms, deeming them as a potential “secret system” that unjustly penalizes individuals for crimes they have not yet committed. Criticisms point out that these risk assessment scores have been employed in several instances to facilitate mass arrests. The underlying fear is that these tools disproportionately target individuals of color, further exacerbating existing biases within the criminal justice system. Moreover, skeptics argue that despite their implementation, Chicago has not witnessed a significant reduction in the wave of murders that has afflicted the city in recent years. These apprehensions highlight the need for careful evaluation and scrutiny to ensure that AI systems are both transparent and unbiased in their approach to maintaining public safety.
Notwithstanding these apprehensions, there are other countries that are swiftly advancing in the deployment of AI technologies in this domain. China, for instance, possesses significant resources and extensive access to various forms of biometric data, such as voices, faces, and more, which provides them with a strong foundation for technological development. Utilizing cutting-edge advancements, it has become feasible to correlate images and voices with diverse information sources and employ AI algorithms on these merged datasets to enhance law enforcement capabilities and bolster national security. Through initiatives like the “Sharp Eyes” program, Chinese law enforcement agencies integrate video footage, social media activities, online transactions, travel records, and personal identification details into a comprehensive “police cloud” database. This integrated system enables authorities to monitor and track individuals involved in criminal activities, potential law violators, and even terrorists. Consequently, China has emerged as the leading global example of a surveillance state powered by AI technologies.
Health Care
AI tools are playing a crucial role in enhancing the computational capabilities within the healthcare field. A notable example is Merantix, a German company that leverages deep learning techniques to address medical challenges. One of their applications focuses on medical imaging, specifically the detection of lymph nodes in computed tomography (CT) images. By labeling these nodes and identifying small anomalies or growths that may pose a concern, Merantix’s technology offers a valuable solution. While this task can also be performed by humans, radiologists typically charge $100 per hour and can meticulously analyze only about four images within that time. At a scale of 10,000 images, the cost of such a manual process would reach an exorbitant $250,000, rendering it economically unfeasible.
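The cost figure follows directly from the numbers in the text, as a quick check shows:

```python
# Reproducing the cost arithmetic from the text: 10,000 CT images,
# a radiologist rate of $100/hour, roughly 4 images analyzed per hour.
images = 10_000
rate_per_hour = 100     # dollars
images_per_hour = 4

hours = images / images_per_hour   # 2,500 hours of work
cost = hours * rate_per_hour       # $250,000
print(f"{hours:.0f} hours, ${cost:,.0f}")
```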
Deep learning plays a vital role in training computers to distinguish between normal and irregular lymph nodes. By utilizing data sets and conducting imaging exercises, computers can learn the visual characteristics of healthy and potentially cancerous lymph nodes. Radiological imaging specialists can then apply this acquired knowledge to analyze patients’ actual images and determine their risk of cancerous lymph nodes. With the ability to identify unhealthy nodes amidst a majority of healthy ones, AI proves invaluable in enhancing diagnostic accuracy and efficiency.
Furthermore, AI has found application in addressing congestive heart failure, a prevalent condition affecting 10 percent of senior citizens and incurring a staggering $35 billion annual cost in the United States. AI tools offer valuable assistance by proactively predicting potential challenges and allocating resources to patient education, sensing, and proactive interventions. This approach aims to prevent hospitalizations by identifying and addressing issues before they escalate, thereby improving patient outcomes and reducing healthcare costs.
Transportation
Transportation is experiencing a profound transformation due to the integration of AI and machine learning, leading to significant advancements. A study conducted by the Brookings Institution’s Cameron Kerry and Jack Karsten revealed that the autonomous vehicle industry received over $80 billion in investments from August 2014 to June 2017. These investments encompass various applications, including autonomous driving and the underlying core technologies crucial for this sector’s development.
Autonomous vehicles, encompassing cars, trucks, buses, and drone delivery systems, leverage cutting-edge technological capabilities. These capabilities comprise automated vehicle guidance and braking, advanced lane-changing systems, utilization of cameras and sensors for collision avoidance, real-time analysis of information using AI algorithms, and the utilization of high-performance computing and deep learning systems to adapt swiftly to new situations with the aid of detailed maps.
Central to navigation and collision avoidance are Light Detection and Ranging (LIDAR) systems combined with AI. Mounted on vehicle rooftops, LIDAR units sweep pulsed laser light across the surroundings to capture a comprehensive 360-degree view of the environment, complementing the vehicle’s radar instruments. By measuring the speed and distance of objects, these systems, along with strategically placed sensors on the vehicle’s front, sides, and rear, provide critical information. This enables fast-moving vehicles to remain within their designated lanes, avoid collisions with other vehicles, and promptly apply brakes and steering when necessary, thereby ensuring immediate accident prevention.
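The range-and-speed measurement described above rests on simple time-of-flight arithmetic. The sketch below shows the idea under idealized assumptions (a single clean return pulse, hypothetical timings): a light pulse travels out and back, so range is half the round-trip distance, and comparing two successive ranges yields closing speed.

```python
# Time-of-flight ranging as used by LIDAR: distance = c * t / 2.
# Pulse timings below are hypothetical, chosen for illustration.
C = 299_792_458  # speed of light, m/s

def range_from_echo(round_trip_seconds):
    """Distance to the object that reflected the pulse, in metres."""
    return C * round_trip_seconds / 2

r1 = range_from_echo(4.0e-7)      # first pulse: object ~60 m away
r2 = range_from_echo(3.8e-7)      # pulse 0.1 s later: ~57 m away
closing_speed = (r1 - r2) / 0.1   # metres per second toward the sensor
print(f"{r1:.1f} m -> {r2:.1f} m, closing at {closing_speed:.1f} m/s")
```

Real units fire hundreds of thousands of pulses per second across the full sweep, which is why the resulting point clouds demand the high-performance computing discussed next.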
The integration of cameras and sensors in autonomous vehicles results in the accumulation of vast amounts of data that must be processed in real-time to ensure safe maneuvering and avoidance of nearby vehicles. This necessitates the utilization of high-performance computing, sophisticated algorithms, and deep learning systems, which enable vehicles to swiftly adapt to new and dynamic scenarios. Consequently, the focal point lies in the software itself, rather than the physical automobile or truck. Advanced software empowers vehicles to learn from the collective experiences of other vehicles on the road, enabling them to dynamically adjust their guidance systems in response to changing weather conditions, driving patterns, or road conditions.
The potential of autonomous vehicles has captured the interest of ride-sharing companies, who recognize the advantages in terms of customer service and enhanced labor productivity. Major ride-sharing companies are actively exploring the possibilities presented by driverless cars. The rise of car-sharing and taxi services, exemplified by industry leaders such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo in Great Britain, and Didi Chuxing in China, demonstrates the tremendous potential of this transportation model. Notably, Uber recently made a significant move by entering into an agreement with Volvo to acquire 24,000 autonomous cars for integration into their ride-sharing service.
Nevertheless, this progress suffered a significant setback in March 2018, when one of Uber’s autonomous vehicles struck and fatally injured a pedestrian in Arizona. This unfortunate incident prompted Uber, along with various automobile manufacturers, to promptly suspend their testing programs and initiate thorough investigations to determine the causes and circumstances surrounding the tragedy. Both the industry and consumers are seeking reassurance that the technology is not only safe but also capable of fulfilling its intended pledges. Without compelling explanations and remedies, this accident has the potential to impede the advancement of AI in the transportation sector.
Smart Cities
Metropolitan administrations are leveraging AI technologies to enhance the provision of urban services. A case in point is the Cincinnati Fire Department, which has adopted data analytics to optimize its response to medical emergencies. Through an advanced analytics system, the dispatcher receives recommendations on the most suitable response for a medical emergency call, considering various factors such as the nature of the call, location, weather conditions, and historical data from similar incidents. This enables more informed decision-making, determining whether a patient can receive treatment on-site or requires transportation to the hospital, thereby improving the overall efficiency and effectiveness of emergency medical services.
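The dispatch-recommendation idea can be sketched as a lookup over similar past incidents: score each response option by how often it sufficed for comparable calls. The incident records and categories below are invented for illustration; Cincinnati's actual system weighs many more factors (weather, time of day, unit availability).

```python
# Toy sketch of recommendation-from-history for emergency dispatch.
# All incident records are hypothetical.
from collections import Counter

history = [
    ("chest pain", "downtown", "transport"),
    ("chest pain", "downtown", "transport"),
    ("minor fall", "suburb",   "treat on site"),
    ("chest pain", "suburb",   "transport"),
    ("minor fall", "downtown", "treat on site"),
]

def recommend(call_type, location):
    """Most common past response for similar calls; fall back to call type alone."""
    matches = [r for t, loc, r in history if t == call_type and loc == location]
    if not matches:
        matches = [r for t, _, r in history if t == call_type]
    return Counter(matches).most_common(1)[0][0] if matches else "transport"

print(recommend("chest pain", "downtown"))
```

The recommendation reaches the dispatcher as decision support; as the text notes, the human still decides whether the patient is treated on-site or transported.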
With approximately 80,000 service requests annually, Cincinnati officials are leveraging this technology to prioritize responses and enhance emergency management. They recognize AI as a powerful tool for handling large volumes of data and identifying efficient approaches to address public needs. Rather than relying on ad hoc methods, authorities are adopting a proactive stance towards urban service delivery.
Cincinnati is not an isolated case in this regard. Many metropolitan areas are embracing smart city applications that harness AI to improve various aspects of urban life, including service delivery, environmental planning, resource management, energy efficiency, and crime prevention. According to Fast Company’s smart cities index, top adopters in the United States include Seattle, Boston, San Francisco, Washington, D.C., and New York City. Seattle, for instance, prioritizes sustainability and employs AI to manage energy consumption and resource allocation. Boston has implemented a “City Hall To Go” initiative to ensure equitable access to public services in underserved communities. Additionally, the city employs surveillance technologies such as cameras, inductive loops for traffic management, and acoustic sensors to detect gunshot incidents. San Francisco has achieved LEED sustainability certification for 203 buildings, demonstrating its commitment to environmental standards.
Metropolitan areas are at the forefront of implementing AI solutions, leading the way in the adoption of advanced technologies. In fact, a report by the National League of Cities reveals that 66 percent of cities across the United States are actively investing in smart city technology. The report highlights several key applications that have gained traction, including the use of smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and the integration of radio frequency identification sensors in pavement. These initiatives demonstrate the commitment of cities to leverage AI to enhance various aspects of urban life and improve the overall efficiency of public services.
3. Policy, Regulatory, and Ethical Issues
These diverse examples across various sectors vividly illustrate the transformative power of AI in shaping different aspects of human existence. The growing integration of AI and autonomous devices into our daily lives is revolutionizing core operations and decision-making processes within organizations, leading to increased efficiency and quicker responses.
However, these advancements also give rise to crucial policy, regulatory, and ethical considerations. How can we ensure equitable access to data? How do we prevent the use of biased or unfair data in algorithmic systems? What ethical principles should guide software programming, and to what extent should designers disclose their decision-making processes? Moreover, the issue of legal liability arises when algorithms result in harmful outcomes. Addressing these complex questions becomes imperative as we navigate the ethical and societal implications of AI integration.
Data Access Issues
Creating a “data-friendly ecosystem with unified standards and cross-platform sharing” is the key to maximizing the potential of AI. The success of AI development relies on real-time analysis of accessible data that can be effectively applied to practical challenges. Establishing data that is readily available for exploration within the research community is a fundamental requirement for successful AI advancement.
According to a study conducted by the McKinsey Global Institute, countries that promote open data sources and facilitate data sharing are more likely to witness significant progress in AI. In this respect, the United States holds a notable advantage over China. Global assessments of data openness rank the United States eighth worldwide, while China ranks 93rd.
However, the United States currently lacks a cohesive national data strategy. There is a scarcity of protocols to encourage research access or platforms that enable the extraction of valuable insights from proprietary data. Ownership of data and the extent of its public domain remain ambiguous. These uncertainties not only impede the innovation economy but also hinder academic research progress. In the following section, we will explore potential measures to enhance data accessibility for researchers.
Bias in Data and Algorithms
In certain cases, specific AI systems have been associated with enabling discriminatory or biased practices. One notable example is the accusations against Airbnb, where homeowners on the platform were accused of engaging in discrimination against racial minorities. Research conducted by the Harvard Business School revealed that individuals with distinctly African American names were approximately 16 percent less likely to be accepted as guests compared to those with distinctly white names.
Facial recognition software also raises concerns related to racial issues. Many of these systems function by comparing an individual’s face to a database of various faces. As highlighted by Joy Buolamwini from the Algorithmic Justice League, “If your facial recognition data primarily consists of Caucasian faces, the program will primarily learn to recognize those.” Unless databases incorporate diverse data, these programs exhibit poor performance when attempting to identify features of African-American or Asian-American individuals.
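Buolamwini's point about unrepresentative training data can be shown with a deliberately tiny toy: a recognizer whose acceptance threshold is derived only from one group's feature distribution will reject samples from an under-represented group. All numbers below are invented; real face-recognition systems use high-dimensional embeddings, but the failure mode is the same.

```python
# Toy illustration of the data-representation problem: a threshold
# "learned" from one group's feature distribution mis-handles samples
# from an under-represented group. All values are hypothetical.
import statistics

training_features = [0.9, 1.0, 1.1, 0.95, 1.05]   # dominated by one group
threshold = statistics.mean(training_features) - 2 * statistics.pstdev(training_features)

def recognized(feature_value):
    # Accepts only values close to what the model has already seen.
    return feature_value >= threshold

print(recognized(1.0))   # in-distribution sample
print(recognized(0.5))   # under-represented sample
```

The remedy the text points to is the same one this toy suggests: broaden the training data so the learned decision boundary covers everyone the system will be asked to recognize.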
Many historical data sets often reflect traditional values, which may or may not align with the desired preferences in a current system. This approach, as highlighted by Buolamwini, carries the risk of perpetuating past inequities. With the rise of automation and increased reliance on algorithms for crucial decisions such as insurance coverage, loan defaults, and recidivism risk assessments, it becomes imperative to address this issue. Even admissions decisions, determining the educational opportunities for our children, are increasingly becoming automated. It is essential that we do not carry forward the structural inequalities of the past into the future we are shaping.
AI Ethics & Transparency
Program decisions are influenced by ethical considerations and value choices embedded within algorithms. Consequently, these systems give rise to inquiries about the criteria employed in automated decision-making processes. There is a growing demand from individuals seeking a deeper understanding of algorithmic functioning and the specific choices being made.
In urban schools across the United States, enrollment decisions are increasingly being driven by algorithms that take into account various factors, including parent preferences, neighborhood characteristics, income level, and demographic background. Jon Valant, a researcher at the Brookings Institution, highlights the example of Bricolage Academy in New Orleans, which allocates up to 33 percent of available seats to economically disadvantaged applicants as a priority. However, in practice, many cities have opted for categories that prioritize siblings of current students, children of school employees, and families residing within the school’s general vicinity. As a result, enrollment outcomes can vary significantly depending on the inclusion of such considerations.
The setup of AI systems can have varying implications, as they have the potential to facilitate discriminatory practices in mortgage applications, enable personal biases against certain individuals, or contribute to the creation of biased lists or screenings based on unfair criteria. The considerations taken into account during the programming of these systems significantly influence their functioning and the impact they have on customers. It is crucial to carefully evaluate the factors and criteria involved in programming decisions to ensure fairness and avoid discriminatory outcomes.
In May 2018, the European Union (EU) implemented the General Data Protection Regulation (GDPR) to address these concerns. The GDPR includes provisions that grant individuals the right to opt out of personalized advertisements and allows them to challenge algorithmic decisions that have legal or significant consequences. Individuals are also entitled to seek human intervention and obtain explanations regarding the specific processes used by algorithms to generate certain outcomes. These guidelines aim to safeguard personal data and promote transparency by shedding light on the functioning of algorithmic systems, which are often referred to as “black boxes.”
Legal Liabilities
The legal responsibility surrounding AI systems raises significant concerns. In cases where harm, violations, or even fatalities occur, the operators of the algorithm are likely to be subject to product liability regulations. Precedents from past legal cases have demonstrated that liability is determined by the specific circumstances of each situation, and the severity of penalties can vary from civil fines to imprisonment for severe damages. The recent incident involving Uber in Arizona, which resulted in a fatality, will serve as a crucial test case for assessing legal liability. The state actively encouraged Uber to conduct tests on its autonomous vehicles and granted the company substantial flexibility in terms of road testing. It remains uncertain whether lawsuits will arise from this incident and which parties will be held accountable, including the human backup driver, the state of Arizona, the Phoenix suburb where the accident occurred, Uber, software developers, or the automobile manufacturer. Given the involvement of multiple individuals and organizations in the road testing, numerous legal considerations must be addressed and resolved.
In domains outside of transportation, digital platforms frequently impose limited liability for the activities taking place on their websites. For instance, Airbnb enforces a requirement that individuals must waive their right to sue or participate in class-action lawsuits or arbitrations as a condition for using the service. By compelling users to relinquish fundamental rights, the company restricts consumer protections and hampers individuals’ ability to challenge discriminatory practices stemming from unfair algorithms. However, whether the principle of platform neutrality applies effectively across various sectors remains to be widely established and determined.
4. Recommendations
To strike a balance between innovation and fundamental human values, we put forth several recommendations for advancing AI. These recommendations encompass enhancing data accessibility, augmenting government investments in AI, fostering the development of AI workforce, establishing a federal advisory committee, engaging with state and local authorities to ensure the implementation of effective policies, regulating overarching objectives rather than specific algorithms, acknowledging bias as a significant concern in AI, preserving mechanisms for human control and oversight, and imposing penalties for malicious activities while promoting cybersecurity.
Improve Data Access
To foster innovation and safeguard consumer interests, it is crucial for the United States to formulate a comprehensive data strategy. Currently, there is a lack of standardized guidelines pertaining to data access, sharing, and protection. Most data remains proprietary and is not extensively shared with the research community, which hampers innovation and system development. Artificial intelligence heavily relies on data for testing and enhancing its learning capabilities. Without comprehensive access to structured and unstructured data sets, the true potential of AI cannot be fully realized.
In particular, the research community requires improved access to government and business data, while implementing necessary safeguards to prevent misuse similar to the Cambridge Analytica incident involving Facebook data. Various approaches can be explored to facilitate data access for researchers. One such approach involves voluntary agreements with companies that possess proprietary data. For instance, Facebook recently announced a collaboration with Stanford economist Raj Chetty to investigate inequality using its social media data. Under this arrangement, researchers underwent background checks and accessed data exclusively through secured platforms to ensure user privacy and security.
For a considerable time, Google has provided aggregated search results to researchers and the public at large. This offering, known as the “Trends” site, allows scholars to analyze various subjects, including public interest in Trump, attitudes toward democracy, and perspectives on the overall economy. This valuable resource enables individuals to monitor shifts in public interest and identify topics that capture the attention of the general population.
Twitter offers researchers access to a substantial amount of its tweets through application programming interfaces (APIs). These APIs enable external individuals to develop application software and utilize data from the social media platform. Researchers can leverage these tools to examine patterns in social media communications and analyze how users are commenting on and reacting to various current events. This access to data enhances the study of social media dynamics and provides valuable insights into public discourse.
In certain sectors where a clear public benefit exists, governments can play a crucial role in facilitating collaboration by establishing data-sharing infrastructure. An excellent example is the National Cancer Institute, which has spearheaded a data-sharing protocol enabling certified researchers to access de-identified health data derived from clinical records, claims information, and drug therapies. This empowers researchers to assess the effectiveness and efficiency of medical approaches and offer recommendations while preserving the privacy of individual patients.
Public-private data partnerships could also be formed to leverage government and business datasets and enhance system performance. For instance, cities can integrate information from ride-sharing services with their own data on social service locations, bus lines, mass transit, and highway congestion. This integration would improve transportation planning, assist in mitigating traffic congestion, and facilitate highway and mass transit development initiatives in metropolitan areas. Such collaborations have the potential to address transportation challenges and optimize urban mobility effectively.
A combination of these approaches can significantly enhance data access for researchers, the government, and the business community, while respecting personal privacy. Ian Buck, the vice president of NVIDIA, highlights the importance of data as the driving force behind AI advancements, stating, “Data is the fuel that drives the AI engine. The federal government holds extensive information resources, and opening access to that data will enable transformative insights for the U.S. economy.” The federal government has already made substantial progress in this area through initiatives like Data.gov, making more than 230,000 datasets available to the public. This has spurred innovation, facilitated advancements in AI and data analytics, and contributed to societal benefits. Similarly, the private sector should also support research data access to unlock the full potential of artificial intelligence for the betterment of society.
Increase Government Investment in AI
Greg Brockman, co-founder of OpenAI, highlights that the United States government’s investment in non-classified AI technology is merely $1.1 billion. This amount falls significantly behind the investments made by China and other leading nations in this field of research. This notable shortfall is particularly significant considering the substantial economic benefits associated with AI. To promote economic growth and foster societal advancements, it is crucial for federal officials to ramp up their investment in artificial intelligence and data analytics. Increased funding in this area is expected to generate manifold returns in terms of economic prosperity and social progress.
Promote Digital Education and Workforce Development
With the rapid expansion of AI applications across various sectors, it becomes imperative to reimagine our educational institutions to prepare students for a future where AI will be ubiquitous. Presently, many students do not receive adequate training in the skills required in an AI-driven world. There is a notable shortage of professionals in fields such as data science, computer science, engineering, coding, and platform development. The scarcity of these skills poses a significant challenge, as it hampers the progress of AI development.
To address this issue, both state and federal governments have begun investing in AI human capital. For instance, in 2017, the National Science Foundation provided funding for more than 6,500 graduate students in computer-related fields and initiated various initiatives to promote data and computer science education at all levels, from pre-K to higher and continuing education. The objective is to establish a robust talent pipeline of AI and data analytics professionals, enabling the United States to fully harness the benefits of the knowledge revolution.
However, there must also be significant changes in the learning process itself. In an AI-driven world, it is not only technical skills that are essential but also critical reasoning, collaboration, design thinking, visual information presentation, and independent thinking, among others. AI will reshape how society and the economy function, necessitating a holistic perspective on the implications for ethics, governance, and societal impact. Individuals will need the capacity to think broadly, addressing diverse questions and integrating knowledge from various disciplines.
One innovative initiative that exemplifies new approaches to prepare students for a digital future is IBM’s Teacher Advisor program. Leveraging Watson’s free online tools, this program assists teachers in incorporating the latest knowledge into their classrooms. It empowers educators to create fresh lesson plans in STEM and non-STEM subjects, discover relevant instructional videos, and optimize students’ learning experiences. These initiatives serve as precursors to the development of new educational environments that need to be established.
Create a Federal AI Advisory Committee
Federal policymakers need to carefully consider their approach to artificial intelligence (AI). As previously highlighted, there are numerous issues to address, ranging from enhancing data accessibility to tackling concerns of bias and discrimination. It is crucial that these and other related concerns are thoroughly examined to harness the full potential of this emerging technology.
To advance in this realm, several members of Congress have proposed the “Future of Artificial Intelligence Act,” a bill aimed at establishing comprehensive policy and legal principles for AI. The legislation suggests the creation of a federal advisory committee, under the purview of the secretary of commerce, to provide guidance on the development and implementation of artificial intelligence. The bill offers a framework through which the federal government can seek advice on promoting an environment conducive to investment and innovation, ensuring the global competitiveness of the United States, optimizing the utilization of AI to address potential changes in the workforce, supporting unbiased AI development and application, and safeguarding individuals’ privacy rights.
The committee tasked with addressing the implications of artificial intelligence (AI) is presented with a range of specific questions to consider. These include topics such as competitiveness, the impact on the workforce, education and ethics training, data sharing, international cooperation, accountability, machine learning bias, the effects on rural areas, government efficiency, investment climate, job impact, and consumer impact. The committee’s mandate involves submitting a report to Congress and the administration within 540 days of the legislation’s enactment, outlining any necessary legislative or administrative actions pertaining to AI.
While this legislation represents a positive step forward, it is important to acknowledge the rapidly evolving nature of the field. Therefore, it is recommended to shorten the reporting timeline from 540 days to 180 days. Waiting a year and a half for the committee’s report would risk missing valuable opportunities and delaying crucial actions on pressing matters. Considering the swift progress in AI, expediting the committee’s analysis would prove highly advantageous.
Engage with State and Local Officials
States and localities are also taking proactive measures concerning artificial intelligence (AI). A notable example is the unanimous passage of a bill by the New York City Council, which mandates the formation of a task force responsible for monitoring the fairness and validity of algorithms employed by municipal agencies. These algorithms play a role in various areas, such as determining bail assignments for indigent defendants, establishing firehouse locations, placing students in public schools, assessing teacher performance, identifying Medicaid fraud, and predicting crime patterns.
The aim of the legislation’s developers is to enhance transparency and accountability in the utilization of AI algorithms within the city. They seek a comprehensive understanding of how these algorithms operate, while also addressing concerns about fairness and biases. Consequently, the task force has been given the mandate to analyze these issues and provide recommendations for future AI usage. The task force is scheduled to report its findings to the mayor, covering a range of AI policy, legal, and regulatory matters, by late 2019.
Certain observers express concerns that the forthcoming task force may not go far enough in ensuring algorithm accountability. One notable critique comes from Julia Powles, affiliated with Cornell Tech and New York University, who highlights that the initial version of the bill mandated companies to make their AI source code accessible to the public for inspection. Additionally, it required simulations of decision-making using real data. However, due to criticism of these provisions, former Councilman James Vacca decided to remove them and opted for a task force that would study these matters instead. Vacca and other city officials were apprehensive that disclosing proprietary algorithmic information could impede innovation and pose challenges in finding AI vendors willing to collaborate with the city. The task force’s ability to strike a balance among innovation, privacy, and transparency remains to be seen.
Regulate Broad Objectives Rather Than Specific Algorithms
The European Union has adopted a stringent approach regarding data collection and analysis, imposing limitations on companies gathering data on road conditions and mapping street views.[63] Concerns arise from the potential inclusion of individuals’ personal information from unencrypted Wi-Fi networks in overall data collection. Consequently, the EU has levied fines on technology firms, requested data copies, and imposed constraints on data collection practices. These actions have created challenges for technology companies operating in the region, particularly in developing the high-definition maps required for autonomous vehicles.
The implementation of the General Data Protection Regulation (GDPR) in Europe has introduced significant restrictions on the utilization of artificial intelligence and machine learning. As outlined in published guidelines, the regulations prohibit automated decisions that have a “significant impact” on EU citizens. This includes techniques that assess various aspects of individuals’ lives, such as work performance, economic status, health, personal preferences, interests, reliability, behavior, location, and movements. Additionally, these new regulations grant citizens the right to review how digital services make algorithmic choices that affect them specifically.
Strict interpretation of these regulations poses challenges for European software designers, as well as American designers collaborating with their European counterparts, in integrating artificial intelligence and high-definition mapping into autonomous vehicles. Location tracking and movement monitoring are essential for navigation in these vehicles. Without access to high-definition maps containing geocoded data and the deep learning capabilities derived from such information, the advancement of fully autonomous driving in Europe will be hindered. By implementing these data protection measures and other related actions, the European Union is placing its manufacturers and software designers at a significant disadvantage compared to the rest of the world.
Instead of attempting to scrutinize the inner workings of specific algorithms by opening the “black boxes,” it would be more effective to focus on defining broad objectives for artificial intelligence and implementing policies that promote those objectives. Regulating individual algorithms excessively can impede innovation and create obstacles for companies seeking to leverage the benefits of artificial intelligence.
Take Biases Seriously
Addressing bias and discrimination is a critical concern when it comes to artificial intelligence (AI). Instances of unfair treatment based on historical data have already been observed, necessitating proactive measures to prevent the proliferation of such biases in AI systems. Existing regulations that govern discrimination in the physical realm should be expanded to encompass digital platforms. This expansion will not only safeguard consumers but also instill confidence in the overall integrity of these systems.
In order to encourage widespread adoption of AI advancements, greater transparency is essential regarding the inner workings of AI systems. Andrew Burt, from Immuta, highlights the significance of transparency, stating, “The main challenge faced by predictive analytics lies in transparency. In a world where data science operations are assuming increasingly important roles, the ability of data scientists to effectively explain the functionality of their models will be the key determinant of progress.”
Maintain Mechanisms for Human Oversight and Control
There is a growing consensus among some experts that mechanisms for human oversight and control should be established for AI systems. Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, advocates for the implementation of regulations to govern these systems. Etzioni proposes that AI should be subject to existing laws that have been developed for human behavior, encompassing areas such as cyberbullying, stock manipulation, and terrorist threats. Additionally, he emphasizes the importance of transparency, suggesting that AI systems should clearly disclose their automated nature, differentiating them from human beings. Furthermore, Etzioni asserts that AI systems should not retain or disclose confidential information without explicit approval from the source, recognizing the privacy risks associated with the vast amount of data these tools store. By adhering to these principles, Etzioni believes that AI can be effectively regulated while mitigating potential risks.
The IEEE Global Initiative has developed ethical guidelines for AI and autonomous systems that emphasize the incorporation of widely accepted human norms and rules into the programming of these models. Experts in the field suggest that AI algorithms should consider the significance of these norms, how conflicts between norms can be resolved, and the importance of transparency in the resolution of these conflicts. Ethical considerations require software designs to prioritize “nondeception” and “honesty.” Furthermore, when failures occur, appropriate mitigation mechanisms must be in place to address the resulting consequences. AI systems must be particularly attentive to issues such as bias, discrimination, and fairness, ensuring that these concerns are adequately addressed. By adhering to these ethical principles, AI and autonomous systems can promote responsible and considerate behavior in their operation.
A group of experts in machine learning have proposed the possibility of automating ethical decision-making. They used the classic ethical dilemma known as the trolley problem to explore this concept further. In the context of autonomous vehicles, they posed the question of whether the vehicle should prioritize the safety of its own passengers or pedestrians in the event of an unavoidable accident. To address this complex issue, they developed a “voting-based system” that gathered input from 1.3 million individuals who evaluated various scenarios. By summarizing the collective choices and considering the overall perspective of these participants, they were able to automate ethical decision-making in AI algorithms while incorporating public preferences. It is important to note that this approach does not eliminate the tragic nature of any fatality, as demonstrated in the Uber incident. However, it provides a mechanism for AI developers to integrate ethical considerations into their planning processes.
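The core of the voting-based idea above can be illustrated with a deliberately simplified sketch: collect many individual judgments on a dilemma scenario and let the aggregate choice stand in for a policy. Everything here is hypothetical; the published research uses a much richer preference-learning model over many scenario features, not a simple majority tally.

```python
from collections import Counter

# Illustrative sketch of aggregating public ethical judgments.
# The scenario labels and vote counts are hypothetical.

def aggregate_votes(votes):
    """Return the majority choice and its vote share for one scenario."""
    tally = Counter(votes)
    choice, count = tally.most_common(1)[0]
    return choice, count / len(votes)

# Hypothetical poll: which party should an unavoidable crash protect?
votes = ["pedestrians"] * 70 + ["passengers"] * 30
choice, share = aggregate_votes(votes)
# choice == "pedestrians", share == 0.7
```

The design point is that the value judgment is externalized into data that can be audited and debated, rather than buried in a developer's hand-coded rule.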
Penalize Malicious Behavior and Promote Cybersecurity
In the context of any emerging technology, it is crucial to discourage and prevent malicious use aimed at deceiving or exploiting software for undesirable purposes. This becomes particularly significant considering the dual-use nature of AI, where the same tool can be utilized for both beneficial and malicious intentions. The misuse of AI not only exposes individuals and organizations to unnecessary risks but also undermines the positive potential of this advancing technology. Such malicious behaviors may include hacking, manipulating algorithms, breaching privacy and confidentiality, or engaging in identity theft. It is imperative to impose substantial penalties to deter and discourage attempts to hijack AI for the purpose of obtaining confidential information or engaging in illicit activities.
In an ever-evolving world where numerous entities possess advanced computing capabilities, cybersecurity becomes a paramount concern. Nations must prioritize the protection of their own systems and prevent other countries from compromising their security. The U.S. Department of Homeland Security highlights the significance of this issue by citing an example: a major American bank receives approximately 11 million calls per week at its service center. To safeguard its telephony infrastructure from denial of service attacks, the bank employs a “machine learning-based policy engine” that effectively blocks over 120,000 calls each month. These blocks are based on voice firewall policies, including identifying and thwarting harassing callers, robocalls, and potential fraudulent calls. This utilization of machine learning exemplifies how it can contribute to the defense of technology systems against malicious attacks.
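The bank's actual policy engine is proprietary, so the following is only a hypothetical sketch of the general pattern the example describes: each incoming call is scored (here by stub heuristics standing in for a trained model) and blocked when it crosses a firewall policy threshold. All field names and thresholds are invented for illustration.

```python
# Hypothetical sketch of a voice-firewall policy layer.
# A real deployment would score calls with a trained classifier;
# the heuristics below are stand-ins.

BLOCK_THRESHOLD = 0.8  # hypothetical score above which calls are blocked

def score_call(call):
    """Stub risk score; a real system would use a trained model."""
    score = 0.0
    if call.get("calls_past_hour", 0) > 50:       # robocall-like volume
        score += 0.5
    if call.get("spoofed_caller_id"):             # common fraud signal
        score += 0.4
    if call.get("prior_harassment_reports", 0) > 0:
        score += 0.3
    return min(score, 1.0)

def should_block(call):
    return score_call(call) >= BLOCK_THRESHOLD

calls = [
    {"calls_past_hour": 120, "spoofed_caller_id": True},  # likely robocall
    {"calls_past_hour": 2},                               # ordinary caller
]
blocked = [should_block(c) for c in calls]
# blocked == [True, False]
```

Separating the scoring model from the blocking policy, as sketched here, is what lets operators tune or audit the firewall rules without retraining the underlying classifier.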
5. Conclusion
In summary, the world stands at the brink of a transformative era, driven by the advancements in artificial intelligence and data analytics. We are witnessing remarkable implementations of these technologies in various sectors such as finance, national security, healthcare, criminal justice, transportation, and smart cities. These deployments have already reshaped decision-making processes, business models, risk management approaches, and overall system performance, resulting in significant economic and societal advantages. However, the unfolding of AI systems carries profound implications for society at large.
It is crucial to address policy considerations, reconcile ethical dilemmas, resolve legal challenges, and determine the appropriate level of transparency in AI and data analytic solutions. The choices made by humans in software development profoundly impact how decisions are formulated and integrated into organizational practices. Understanding the intricacies of these processes is of utmost importance, as they will inevitably have a substantial impact on the general public both in the near future and for years to come. AI has the potential to be a revolution in human affairs, perhaps even becoming the most influential innovation in human history.