Artificial Intelligence in Intensive Care Medicine: Bibliometric Analysis

Background:

Interest in critical care-related artificial intelligence (AI) research is growing rapidly. However, the literature still lacks comprehensive bibliometric studies that measure and analyze scientific publications globally.


Objective:

The objective of this study was to assess the global research trends in AI in intensive care medicine based on publication outputs, citations, coauthorships between nations, and co-occurrences of author keywords.


Methods:

A total of 3619 documents published up to March 2022 were retrieved from the Scopus database. After restricting the document type to articles, titles and abstracts were checked for eligibility. The final bibliometric analysis, performed with VOSviewer, included 1198 papers. The growth rate of publications, preferred journals, leading research countries, international collaborations, and top institutions were computed.
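
To make the keyword co-occurrence step concrete, here is a minimal sketch (not the authors' actual pipeline) of how author-keyword co-occurrence counts, the kind of input VOSviewer maps, can be derived from a Scopus CSV export. The file name is hypothetical; the "Author Keywords" column and semicolon separator follow Scopus's standard export format.

```python
# Minimal sketch, assuming a Scopus CSV export of the included papers.
from collections import Counter
from itertools import combinations

import pandas as pd

df = pd.read_csv("scopus_export.csv")  # hypothetical export of the 1198 papers

pair_counts = Counter()
for cell in df["Author Keywords"].dropna():
    # Scopus separates keywords with ";"; normalize case so "AI" and "ai" merge.
    keywords = sorted({kw.strip().lower() for kw in cell.split(";") if kw.strip()})
    pair_counts.update(combinations(keywords, 2))

# The most frequent pairs correspond to the densest links in a VOSviewer map.
for (kw1, kw2), n in pair_counts.most_common(10):
    print(f"{kw1} <-> {kw2}: {n}")
```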


Results:

The number of publications increased steeply between 2018 and 2022, accounting for 72.53% (869/1198) of all the included papers. The United States and China together contributed approximately 55.17% (661/1198) of the total publications. Of the 15 most productive institutions, 9 were among the top 100 universities worldwide. Research hot spots for AI in critically ill patients included detecting clinical deterioration; monitoring; predicting disease progression, mortality, and prognosis; and classifying disease phenotypes or subtypes. Neural networks, decision support systems, machine learning, and deep learning were all commonly used AI technologies.


Conclusions:

This study highlights popular areas of AI research aimed at improving health care in intensive care units, offers a comprehensive look at research trends in AI applications in the intensive care unit, and provides insight into potential collaborations and prospects for future research. The 30 most cited articles are listed in detail. For AI-based clinical research to be sufficiently convincing for routine critical care practice, collaborative research efforts are needed to increase the maturity and robustness of AI-driven models.


Keywords:

artificial intelligence; bibliometric analysis; intensive care medicine; machine learning; sepsis.


Artificial intelligence, law experts build algorithm that predicts length of court sentences

By Andrew Lensen and Marcin Betkier for The Conversation

The rapid development of artificial intelligence (AI) has led to its deployment in courtrooms overseas. 

In China, robot judges decide small-claims cases, while in some Malaysian courts, AI has been used to recommend sentences for offences such as drug possession.

Is it time for New Zealand to consider AI in its own judicial system?

Intuitively, we do not want to be judged by a computer. And there are good reasons for our reluctance – with valid concerns over the potential for bias and discrimination.

But does this mean we should be afraid of any and all use of AI in the courts?

In our current system, a judge sentences a defendant once they have been found guilty. Society trusts judges to hand down fair sentences based on their knowledge and experience.

But sentencing is a task AI may be able to perform instead – after all, AI machines are already used to predict some criminal behaviour, such as financial fraud. Before considering the role of AI in the courtroom, then, we need a clear understanding of what it actually is.

AI simply refers to a machine behaving in a way that humans identify as “intelligent”. Most modern AI is machine learning, where a computer algorithm learns the patterns within a set of data.

For example, a machine learning algorithm could learn the patterns in a database of houses on Trade Me in order to predict house prices.
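As an illustration only (the listing features and prices below are invented, not real Trade Me data), such a model can be sketched in a few lines:

```python
# Purely illustrative sketch of a price-prediction model; all data is made up.
from sklearn.linear_model import LinearRegression

# Each row: [floor area m^2, bedrooms, distance to city centre km]
X = [[90, 2, 12.0], [140, 3, 8.5], [200, 4, 5.0], [75, 2, 20.0]]
y = [550_000, 780_000, 1_150_000, 430_000]  # sale prices in NZD (invented)

model = LinearRegression().fit(X, y)
print(model.predict([[120, 3, 10.0]]))  # predicted price for an unseen house
```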

So, could AI sentencing be a feasible option in New Zealand’s courts? What might it look like? Or could AI at least assist judges in the sentencing process?


‘Artificial Intelligence of the Future’ Award to the General Directorate of Customs Enforcement


The Ministry of Commerce’s General Directorate of Customs Enforcement won first prize in the “Artificial Intelligence of the Future” category at the IDC Digital Transformation Awards with its MUHAFIZ project.

According to a statement from the Ministry, the Digital Transformation Summit, organized by the IT-sector research and consultancy company International Data Corporation (IDC) Turkey, brought together senior executives and business units from Turkey’s leading public and private sector organizations and was held in Sapanca on 23-24 November. At the summit, different technologies were discussed in sessions on main topics such as Future-Ready Organizations, the Future of Work, Digital Transformation, Innovation Management, Cloud Computing, the Digital Economy, the Internet of Things, Cybersecurity, Big Data and Analytics, Artificial Intelligence, and Robotic Process Automation.

At the Digital Transformation Award Ceremony, organized as part of Turkey’s most comprehensive technology event, awards were given in 13 categories to institutions and managers that ranked highest with projects realized throughout the year. IDC analysts and jury members evaluated 108 applications from 478 institutions and selected first-, second- and third-place winners in each of the 13 categories.

At the IDC Digital Transformation Awards, one of the most prestigious award programs in the industry, where projects and initiatives leading digital transformation are evaluated, the Ministry of Commerce General Directorate of Customs Enforcement won first prize in the “Best in Future of Intelligence” category with its “MUHAFIZ, Artificial Intelligence in Big Data” project.

The General Directorate of Customs Enforcement makes maximum use of technological opportunities in the fight against smuggling. Thanks to the vehicle and container scanning systems used at the customs gates, the other technical equipment available to personnel, and the state-of-the-art software used to prevent illegal trade without disrupting legal trade in the face of increasing trade volume, smuggling-focused analyses can be carried out in a very short time and risk-based controls realized.

What does the MUHAFIZ (“Guard”) program do?

The MUHAFIZ program is designed to apply state-of-the-art artificial intelligence to big data in order to determine within seconds which vehicles, goods and passengers should be inspected in the fight against smuggling, and in this way to reveal relationships and risks that are very difficult to uncover with classical methods of analysis. By analyzing the information in large databases with artificial intelligence, MUHAFIZ makes pinpoint determinations that weigh all kinds of risks related to illegal trade carried out by land, air or sea, and transfers personnel experience into the system in a sustainable, permanent way. Beyond that, thanks to its mathematical algorithms, it helps uncover organized crime attempts by making connections too intricate for a human analyst to find.
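
The details of MUHAFIZ are not public, so the following is only a generic sketch of the risk-scoring idea described above: a model trained on past inspection outcomes ranks new declarations so that the riskiest are inspected first. All features, data and labels are invented.

```python
# Toy sketch of risk-based customs targeting; NOT the actual MUHAFIZ system.
from sklearn.ensemble import GradientBoostingClassifier

# Each row: [declared value USD, weight kg, route risk index, prior violations]
X = [[5_000, 200, 0.2, 0], [80_000, 1_500, 0.9, 2],
     [12_000, 400, 0.5, 0], [60_000, 1_200, 0.8, 1]]
y = [0, 1, 0, 1]  # 1 = smuggling found on past inspection (invented labels)

model = GradientBoostingClassifier().fit(X, y)
risk = model.predict_proba([[70_000, 1_400, 0.85, 1]])[0, 1]
print(f"inspection priority score: {risk:.2f}")  # higher = inspect sooner
```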

Artificial Intelligence (AI) Researchers At UC Berkeley Propose A Method To Edit images From Human Instructions

Machine Learning (ML), or more precisely, Deep Learning (DL), has revolutionized the field of Artificial Intelligence (AI) and made tremendous breakthroughs in numerous areas, including computer vision. DL is a branch of ML that uses deep neural networks, i.e., neural networks composed of multiple hidden layers, to accomplish tasks that were previously impossible. It has opened up a whole new world of possibilities, allowing machines to “learn” and make decisions in ways not seen before. In computer vision, DL is nowadays the most powerful tool for image generation and editing.

As a matter of fact, DL models are nowadays capable of creating realistic photographs from scratch in the style of a particular artist, making images look older or younger than they truly are, or exploiting textual descriptions with text-attention mechanisms to guide the generation. A well-known example is Stable Diffusion, a text-to-image generation model recently released in version 2.0.

Several image editing tasks, such as in-painting, colorization, and text-driven transformations, are already performed successfully by DL end-to-end architectures. In particular, text-driven image editing has recently attracted interest from a vast public.

In the original formulation, image editing models traditionally targeted a single editing task, usually style transfer. Other methods encode the images into vectors in the latent space and then manipulate these latent vectors to apply the transformation.

Recently, other publications have focused on pretrained text-to-image diffusion models for image editing. Although some of these models can modify images, in most cases they offer no guarantee that similar text prompts will yield similar outcomes, as is clear from the results presented later on.

The key innovation of the presented approach, termed InstructPix2Pix, is to treat instruction-based image editing as a supervised learning problem. The first task is the generation of pairs composed of text editing instructions and images before/after the edit. The following step is the supervised training of the proposed diffusion model on this generated dataset. The model architecture is summarized in the figure below.

The first part (Training Data Generation in the figure) involves two large-scale pretrained models that operate on different modalities: a language model and a text-to-image model. For the language model, GPT-3 was fine-tuned on a small human-written dataset of 700 editing triplets: input captions, edit instructions, and output captions. The fine-tuned model then generated a final dataset of more than 450k triplets, which are used to guide the editing process. At this stage there are only text triplets, but images are needed to train the diffusion model, so Stable Diffusion and Prompt2Prompt are used to generate appropriate image pairs from these text triplets. In particular, Prompt2Prompt is a recent technique that uses a cross-attention mechanism to achieve great similarity within each pair of generated images. This is exactly what is wanted here, since the idea is to edit or alter a portion of the input image rather than create a completely different one.
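
As a structural sketch (the field names below are illustrative, not taken from the InstructPix2Pix codebase), each generated training example can be thought of as a triplet like this; Stable Diffusion and Prompt2Prompt then render a before/after image pair from the two captions:

```python
# Hypothetical representation of one generated training example.
from dataclasses import dataclass

@dataclass
class EditTriplet:
    input_caption: str   # caption of the original image
    instruction: str     # human-style edit instruction
    output_caption: str  # caption of the edited image

example = EditTriplet(
    input_caption="photograph of a girl riding a horse",
    instruction="have her ride a dragon",
    output_caption="photograph of a girl riding a dragon",
)
print(example.instruction)
```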

The second part (Instruction-following Diffusion Model in the figure) refers to the proposed diffusion model, which aims at producing a transformed image according to an editing instruction and an input image.

The structure is equivalent to the well-known latent diffusion models. Diffusion models learn to generate data samples through a sequence of denoising autoencoders that estimate the input data distribution. Latent diffusion improves the efficiency of diffusion models by operating in the latent space of a pretrained variational autoencoder.

The idea behind diffusion models is conceptually simple. The diffusion process starts by adding noise to an input image or to an encoded latent vector representing the image. A denoiser, guided by the text through cross-attention, is then applied to the noisy representation to recover a clean, detailed result. This was a summary of InstructPix2Pix, a novel text-driven approach to guided image editing. You can find additional information in the links below if you want to learn more about it.
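
Here is a toy sketch of that noising-and-denoising training step. This is not the InstructPix2Pix training code: the linear "denoiser" and the simple noise schedule are stand-ins for the real U-Net and schedule.

```python
# Minimal sketch of the diffusion training objective described above.
import torch

x0 = torch.randn(4, 8)                 # stand-in for encoded image latents
t = 0.3                                # noise level in [0, 1]
eps = torch.randn_like(x0)
x_t = (1 - t) ** 0.5 * x0 + t ** 0.5 * eps   # variance-preserving noising

denoiser = torch.nn.Linear(8, 8)       # toy stand-in for the U-Net denoiser
loss = torch.nn.functional.mse_loss(denoiser(x_t), eps)
loss.backward()                        # one step of the noise-prediction loss
print(loss.item())
```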


Check out the Paper and Project Page. All credit for this research goes to the researchers on this project.


Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. candidate at the Institute of Information Technology (ITEC) at the Alpen-Adria-Universität (AAU) Klagenfurt. He is currently working in the Christian Doppler Laboratory ATHENA and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE evaluation.




Q3 2022 Artificial Intelligence & Machine Learning Report

The full report is available through the PitchBook Platform.

Move aside, crypto: Generative AI took tech VC attention in Q3

Hype around text-to-image AI models in Q3 2022 drew rapid VC investment in the generative AI space. But this did not stop deal activity from falling for the overall AI & ML vertical, where deal value fell 46.7% quarter-over-quarter.

In our latest Artificial Intelligence & Machine Learning Report, PitchBook tracks 68 product categories, and only 20 are on pace to grow in VC funding in 2022, led by intelligent robotics, supply-chain optimization, and conversational AI.

The report maps out the market and notes the emerging opportunities and startups to watch. Generative AI, neural search, and inference semiconductors are finding traction and scooping up deals in the current market. Company spotlights on AI unicorns Databricks and DataRobot show their evolving M&A and partnership priorities.

Table of contents

  • Vertical overview
  • Q3 2022 timeline
  • AI & ML landscape
  • AI & ML VC ecosystem market map
  • VC activity
  • Emerging opportunities
    – Generative AI
    – Neural search
    – Inference semiconductors
  • Select company highlights
    – Databricks
    – DataRobot


An interview with Covington & Burling discussing artificial intelligence in the European Union

This article is an extract from GTDT Market Intelligence Artificial Intelligence 2022.


1 What is the current state of the law and regulation governing AI in your jurisdiction? How would you compare the level of regulation with that in other jurisdictions?

Currently, the European Union does not have laws or regulations that specifically regulate AI systems. However, there are a number of existing laws and regulations – both horizontal and sector-specific – that apply to AI technologies and applications. Perhaps most important is the EU’s General Data Protection Regulation (GDPR), which sets out a range of prescriptive obligations that apply to the processing of personal data, including personal data processed in the context of training, testing and deploying AI applications.

The GDPR also includes transparency and other obligations relating to automated decision-making. Other EU laws in this vein include the Better Enforcement Directive, which requires traders to inform consumers when prices of goods and services have been personalised based on automated decision-making and profiling, and the Platform-to-Business Regulation, which requires online intermediation service providers and search engine providers to be transparent about the algorithms used to rank business users and corporate websites on their services.

Other EU legal frameworks that may apply to AI applications, depending on the context, include medical devices rules, financial services regulations, cybersecurity laws, copyright and other intellectual property rules and consumer protection law.

As described below, the EU is currently considering AI-specific legislation. In that regard, the EU is fairly advanced in its consideration of the unique legal issues that can arise in the context of the development and deployment of AI systems.

2 Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?

European strategy on AI

In 2018, the European Commission (EC) published a Coordinated Plan on Artificial Intelligence, which set out a joint commitment by the EC and the member states to work together to encourage investments in AI technologies, develop and act on AI strategies and programmes, and align AI policy to reduce fragmentation across jurisdictions.

In April 2021, the EC conducted a review of the progress on the 2018 Coordinated Plan, and adopted an updated plan with the following additional policy objectives:

  • set enabling conditions for AI development and uptake in the EU;
  • make the EU the place where excellence thrives from the lab to market;
  • ensure that AI works for people and is a force for good in society; and
  • build strategic leadership in high-impact sectors.

The EC has also proposed that the EU invest at least €1 billion per year from the Horizon Europe and Digital Europe programmes in AI.

At the national level, a 2022 review found that 24 of the 27 EU member states have adopted national strategies on AI – and that the remaining member states are working on national strategies that are expected to be published soon.

The EU has also been actively considering legislation that will regulate AI technologies. These include the following (discussed later in this chapter):

  • the proposed Regulation Laying Down Harmonised Rules on AI (the AI Act Proposal); and
  • the proposed Directive on Adapting Non-Contractual Civil Liability rules to Artificial Intelligence (the AI Liability Directive Proposal).

European data sharing policy

European policymakers recognise that access to data is an important requirement to enable the growth of AI technologies. In 2020, the EC published a Communication on Shaping Europe’s Digital Future and a European Strategy for Data. The Communication recommended enhancing regulatory frameworks to, among other objectives, encourage and enable data sharing.

Over the past year, the EC has adopted legislation aimed at furthering the European strategy for data:

  • In June 2022, the EU adopted its Regulation on European Data Governance (the Data Governance Act). The Data Governance Act includes a range of measures designed to promote the reuse of public sector data and establishes a European Data Innovation Board, among other things.
  • In September 2022, the EU adopted its Regulation on Contestable and Fair Markets in the Digital Sector (the Digital Markets Act). The Digital Markets Act introduces measures to regulate online ‘gatekeepers’. One of the obligations in the Digital Markets Act requires gatekeepers to make available to business users data ‘provided for or generated in the context of’ the business user’s use of the gatekeeper’s services.

The EU institutions are currently reviewing several additional legislative proposals that are also aimed at furthering the European strategy for data. These include the following:

  • In February 2022, the EC published the proposed Regulation on Harmonised Rules on Fair Access to and Use of Data (the Data Act). The Data Act includes provisions designed to give users of certain specified products and related services the right to access and port data generated by their use. The Data Act also seeks to lower the barriers users face when switching between different data processing services.
  • In May 2022, the EC published the proposed Regulation for the European Health Data Space. If adopted, this proposal will create a common EU data space for health data, with the ultimate aim of (1) empowering individuals to control and utilise their own health data in their home country and in other member states, and (2) furthering research, innovation, policy-making and regulatory activities within the health sector.

UK’s innovation-friendly approach

Separate from the EU, the UK government in September 2021 adopted its own National AI Strategy. The UK government’s strategy is focused on adopting an innovation-friendly approach to AI regulation. The UK government followed this Strategy, in July 2022, with a proposal for a new AI rulebook that sets out six AI-related principles. These ‘core principles’ will require developers and users of AI to:

  • ensure that AI is used safely;
  • ensure that AI is technically secure and functions as designed;
  • make sure that AI is appropriately transparent and explainable;
  • consider fairness;
  • identify a legal person to be responsible for AI; and
  • clarify routes to redress or contestability.

The UK government envisages that these core principles will form the basis for sector-specific guidelines to be developed by industry, academia and regulators.

3 What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?

In April 2021, the EC published its proposal for an AI Act. The AI Act Proposal is the first EU legislative proposal that is designed specifically and exclusively to regulate the development, deployment and use of AI systems. The AI Act Proposal adopts a risk-based approach to regulation, imposing the most extensive obligations on providers of ‘high-risk’ AI systems – and prohibiting certain types of AI outright. Certain types of non-high-risk AI systems will also be subject to transparency obligations.

The AI Act Proposal has been the subject of significant scrutiny and debate during the legislative process, and while the final Act is likely to broadly track the EC Proposal, it is likely to have some meaningful differences in the obligations it imposes.

Prohibited AI systems

The AI Act Proposal would ban certain types of AI systems from being placed on the EU market, put into service or used in the EU. These include AI systems that either deploy subliminal techniques (beyond a person’s consciousness) to materially distort a person’s behaviour, or exploit the vulnerabilities of specific groups (such as children or persons with disabilities), in both cases where physical or psychological harm is likely to occur. The AI Act Proposal would also prohibit public authorities from placing on the market, putting into service or using AI systems in the EU for ‘social scoring’, where this leads to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was generated, or is otherwise unjustified or disproportionate. Finally, the AI Act Proposal bans law enforcement from using ‘real-time’ remote biometric identification systems in publicly accessible spaces, subject to limited exceptions (eg, searching for specific potential victims of crime, preventing imminent threats to life or safety or identifying specific suspects of significant criminal offences).

High-risk AI systems

The AI Act Proposal would also classify certain AI systems as high-risk, and subject those systems to more extensive regulation. Prior to placing a ‘high-risk AI system’ on the EU market or putting it into service, providers are required to conduct a conformity assessment procedure (either self-assessment or third-party assessment depending on the type of AI system) of their systems. To demonstrate compliance, providers must draw up an EU declaration of conformity and affix the CE marking of conformity to their systems.

The types of AI systems considered high-risk are enumerated exhaustively in Annexes II and III of the AI Act Proposal, and include AI systems that are, or are safety components of, certain regulated products (eg, medical devices, motor vehicles) and AI systems that are used in certain specific contexts or for specific purposes (eg, biometric identification systems, systems for assessing students in educational or vocational training).

The AI Act Proposal also requires that providers of high-risk AI systems ensure that their AI systems meet certain substantive obligations. Among them, providers must design high-risk AI systems to enable record-keeping; allow for human oversight aimed at minimising risks to health, safety or fundamental rights; and achieve an appropriate level of accuracy, robustness and cybersecurity. Data used to train, validate or test such systems must meet quality criteria, including for possible biases, and be subject to specified data governance practices. Providers must prepare detailed technical documentation, provide specific information to users and adopt comprehensive risk management and quality management systems.

The AI Act Proposal also imposes obligations on importers and distributors of AI systems, to ensure that high-risk AI systems have undergone the conformity assessment procedure and bear the proper conformity marking before being placed on the market, as well as obligations on users of such systems.

Non-high-risk AI systems

The AI Act Proposal would also introduce transparency obligations on certain non-high-risk AI systems, as follows:

  • Providers of AI systems intended to interact with natural persons must develop them in such a way that people know they are interacting with the system.
  • Providers of ‘emotion recognition’ and ‘biometric categorisation’ AI systems must inform people who are exposed to them of their nature.
  • Providers of AI systems that generate or manipulate images, audio or video content must disclose to people that the content is not authentic.

For other non-high-risk AI systems, the AI Act Proposal also encourages providers to create codes of conduct to foster voluntary adoption of the obligations that apply to high-risk AI systems.

Member state guidance on AI ethics

At the member state level, national strategies on AI address the ethical and human rights implications of AI. Like the EC, many member states have established independent bodies tasked with advising on ethical issues raised by AI. These include Germany’s Data Ethics Commission and France’s National Consultative Committee for Ethics. In the UK, the Centre for Data Ethics and Innovation and the government’s Office for AI publish guidance relating to AI ethics.

4 What is the government policy and strategy for managing the national security and trade implications of AI? Are there any trade restrictions that may apply to AI-based products?

On 9 September 2021, the EU’s recast of the Dual-Use Regulation entered into force. While export controls under the previous EU dual use regulation applied to certain AI-based products, such as those that use encryption software, and any AI products that are specifically designed for a military end use, the updated Dual-Use Regulation broadens the scope of the controls and implements more extensive requirements for cyber-surveillance related goods, software and technology, and military-related technical assistance activities.

5 How are AI-related data protection and privacy issues being addressed? Have these issues affected data sharing arrangements in any way?

The GDPR applies to all processing of personal data, including in the context of AI systems. The GDPR imposes, among other obligations, requirements on data controllers to be transparent about their processing, identify a legal basis for the processing, comply with data subject rights, keep personal data secure and keep records to demonstrate compliance with the GDPR.

Notably, the GDPR includes specific requirements on fully automated decision-making (ADM) that has legal or similarly significant effects on individuals (article 22). This provision is likely to be particularly relevant to AI-based algorithmic decision-making processes. Under the GDPR, individuals have the right not to be subject to ADM unless the processing is based on the individual’s explicit consent, is necessary for performance of a contract between the organisation and the individual or is authorised by member state or EU law. Even when these conditions are met, organisations must provide individuals with ‘meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing’ (article 13(2)(f)). Organisations carrying out ADM must also implement safeguards, including, at a minimum, the right to contest the decision and obtain human review of the decision (article 22(3)).

The GDPR will also govern the sharing of personal data between multiple organisations where sharing of personal data is required to develop or deploy an AI application. These rules include ensuring that any joint controllers of the personal data set out their respective roles and responsibilities for compliance with the GDPR in a transparent way (article 26), and also require that controllers put in place data processing agreements with their processors (article 28). Any cross-border transfers of personal data from within the EU to outside the EU will also be subject to the GDPR’s rules on international data transfers (Chapter V).

In addition, the development and deployment of AI technologies in certain contexts may also trigger the requirement to carry out a mandatory data protection impact assessment (article 35), which will require organisations to carry out an in-depth review of their data protection compliance specific to the project.

A number of member state data protection authorities (DPAs) have taken an interest in the application of the GDPR to AI. In May 2022, for example, the European Data Protection Board, which brings together all 27 member state DPAs, published guidelines on facial recognition technology in the area of law enforcement, which are awaiting adoption following a public consultation. The UK Information Commissioner’s Office (ICO) has also published guidance documents on the application of data protection principles to AI. Other DPAs, including the French CNIL, the Norwegian Datatilsynet and the Spanish AEPD, have issued guidance on AI and data protection.

6 How are government authorities enforcing and monitoring compliance with AI legislation, regulations and practice guidance? Which entities are issuing and enforcing regulations, strategies and frameworks with respect to AI?

As there is currently no AI-specific legislation in Europe, government authorities do not yet have the power to enforce and monitor compliance with AI-specific legislation. However, once the AI Act Proposal is implemented, violations of the AI Act Proposal may be subject to fines of up to €30 million or 6 per cent of a company’s worldwide annual turnover (whichever is higher).

To the extent that existing laws and regulations apply to AI applications, government authorities have been exercising their powers under these rules in relation to AI applications. As noted in question 5, some member state DPAs have issued AI-specific guidance in relation to data protection law compliance. Infringements of GDPR could result in fines of up to €20 million or 4 per cent of a company’s worldwide annual turnover (whichever is higher), depending on the provisions infringed.

Further, a number of DPAs have recently taken enforcement actions focused on specific AI use cases, particularly relating to facial recognition technology (FRT) used for surveillance purposes. For example, the Swedish DPA in February 2021 fined the Swedish police for using FRT to identify individuals, and in August 2019 fined the Skellefteå municipality for using FRT to track student attendance in a state school.

In the UK, the use of FRT systems by law enforcement for policing and security purposes was also the subject of a human rights challenge before the English High Court (R (Bridges) v Chief Constable of South Wales Police [2019] WLR (D) 496 (UK)) and Court of Appeal (R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058), and led the UK ICO to subsequently issue an opinion on the use of live FRT by law enforcement in public places. In November 2021, the UK ICO concluded an investigation into Clearview AI’s facial recognition technologies, and fined Clearview AI more than £7.5 million for privacy violations (a reduction from the provisional fine of £17 million). The ICO also ordered the company to delete the data of UK residents from its systems. Subsequently, (1) the French CNIL similarly found that Clearview AI’s facial recognition software breached GDPR and imposed a fine of €20 million and ordered Clearview AI to cease data collection in France, (2) the Italian DPA fined Clearview AI €20 million and ordered the deletion of data of Italian citizens, and (3) the Greek DPA fined Clearview AI €20 million and ordered the deletion of data of Greek citizens. Since many AI applications involve the processing of personal data, we expect DPAs to play an important role in monitoring AI applications.

7 Has your jurisdiction participated in any international frameworks for AI?

The EU has been a thought leader in the international discourse on ethical frameworks for AI. The 2019 AI Ethics Guidelines of the EU’s High-Level Expert Group on AI (AI HLEG) were, at the time, one of the most comprehensive examinations of AI ethics issued worldwide, and their drafting involved a number of non-EU organisations and several government observers. In parallel, the EU was also closely involved in developing the OECD’s ethical principles for AI and the Council of Europe’s Recommendation on the Human Rights Impacts of Algorithmic Systems. The EU also forms part of the Global Partnership on AI (GPAI).

At the United Nations, the EU is involved in the report of the High-Level Panel on Digital Cooperation, including its recommendation on AI. The EC recognises that AI can be a driving force to achieve the UN Sustainable Development Goals and advance the 2030 agenda.

The EC states in its 2020 AI White Paper that the EU will continue to cooperate with like-minded countries and global players on AI, based on an approach that promotes the respect of fundamental rights and European values. Also, article 39 of the EC’s AI Act Proposal provides a mechanism for qualified bodies in third countries to carry out conformity assessments of AI systems under the Act.

On 1 September 2021, the EC announced an international outreach for human-centric AI project (InTouchAI.eu) to promote the EU’s vision on sustainable and trustworthy AI. The aim is to engage with international partners on regulatory and ethical matters and promote responsible development of trustworthy AI at a global level. This includes facilitating dialogue and joint initiatives with partners, conducting public outreach and technology diplomacy and conducting research, intelligence gathering and monitoring of AI developments. Also, at the first meeting of the US–EU Trade and Technology Council on 29 September 2021, the United States and EU ‘affirmed their willingness and intention to develop AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values’. The participants also established 10 working groups to collaborate on projects furthering the development of trustworthy AI. This collaborative approach continued in the second meeting of the US–EU Trade and Technology Council on 15–16 May 2022, where the United States and EU agreed to develop shared methodologies for measuring AI trustworthiness and risks.

The EU member states have also been active in the Council of Europe. On 3 November 2021, the Council of Europe published a Recommendation on the Protection of Individuals with regard to Automatic Processing of Personal Data in the context of profiling, which defines ‘profiling’ as ‘any form of automated processing of personal data, including machine learning systems, consisting in the use of data to evaluate certain personal aspects relating to an individual, particularly to analyse or predict that person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’. The recommendation encourages Council of Europe member states to promote and make legally binding the use of a ‘privacy by design’ approach in the context of profiling, and sets out additional safeguards to protect personal data, the private life of individuals, and fundamental rights and freedoms such as human dignity, privacy, freedom of expression, non-discrimination, social justice, cultural diversity and democracy.

The UK is also actively participating in the international discourse on norms and standards relating to AI. It continues to engage with the OECD, Council of Europe, United Nations and the GPAI.

8 What have been the most noteworthy AI-related developments over the past year in your jurisdiction?

On 28 September 2022, the EC published its proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (the AI Liability Directive Proposal). The AI Liability Directive Proposal sets out harmonised rules on (1) the disclosure or preservation of information regarding high-risk AI systems and the standard of proof required to compel the same, and (2) the burden of proof, and corresponding rebuttable presumptions, applicable to claims for damages caused by AI systems.

The AI Liability Directive Proposal gives courts the power to order providers or users of high-risk AI systems to disclose (or preserve) information about their systems to persons who seek this information to initiate (or decide whether to initiate) redress proceedings against the provider or user. A court may issue such an order upon the request of (1) a ‘potential claimant’, who has already requested this information directly from the provider or user but not received it, or (2) a claimant who has initiated proceedings. The requestor must present facts and evidence ‘sufficient to support the plausibility of a claim’ that the high-risk AI system caused the alleged damage.

Courts will only order a provider or user to disclose as much information as is necessary and proportionate to support a (potential) claim for damages. The court will take into account the legitimate interests of all parties, including any trade secrets. If a disclosure order covers information that is considered a trade secret which a court deems confidential pursuant to the EU Trade Secret Directive, the court may take measures necessary to preserve the confidentiality of that information during the proceedings. If the provider or user does not comply with the court’s order to disclose information, the court may assert a rebuttable presumption that the provider or user breached a duty of care, including that they failed to comply with the provisions of the AI Act that the requestor alleges were violated.

In addition, the AI Liability Directive Proposal identifies a number of circumstances in which a court may presume a (causal) link between (1) the fault of the provider or user of any AI system (whether high-risk or not), and (2) the output produced by the AI system or its failure to produce such an output. For high-risk AI systems, this presumption applies if the claimant has demonstrated the provider or user’s non-compliance with certain obligations under the AI Act, subject to certain exceptions and restrictions. For example, the presumption will not apply if the court finds that the claimant has sufficient evidence and expertise to prove a causal link.

9 Which industry sectors have seen the most development in AI-based products and services in your jurisdiction?

AI uptake has increased across the EU market in a range of sectors, including in the health and transport sectors and by law enforcement.

The use of computer vision to power FRT systems for surveillance, identity verification and border control has been a notable development in the EU, raising a number of data protection law-related concerns, as discussed in the response to question 6. The use of other biometric identification systems, such as voice recognition technology, has also proliferated. Biometric identification technology can be seen in many forms – from voice authentication systems for internet banking to smart speakers for home use.

The digital health sector has also seen an increase in AI-powered solutions, including apps that diagnose diseases, software tools for those with chronic ailments, platforms that facilitate communication between patients and healthcare providers, virtual or augmented reality tools that help administer healthcare, and research projects involving analysis of large data sets (eg, genomics data). Advances in autonomous vehicles would likewise not be possible without AI: such vehicles must implement multiple, complex interrelated AI systems to handle different aspects of driving (eg, localisation, scene understanding, planning, control and user interaction) in order to improve safety, mobility and environmental outcomes.

10 Are there any pending or proposed legislative or regulatory initiatives in relation to AI?

As discussed above, the EU is currently considering two significant AI-related legislative proposals, the AI Act and the AI Liability Directive. The AI Act was proposed in April 2021, and is far advanced in the legislative process, with adoption possible in 2023. The AI Liability Directive was proposed in September 2022, and is still in the early stages of the legislative process.

11 What best practices would you recommend to assess and manage risks arising in the deployment of AI?

Companies developing or deploying AI applications in the EU should be mindful that a number of laws and regulations may apply to their AI application – including, but not limited to, those discussed in the preceding responses. Companies would be well advised to ensure compliance with these laws and look to government authorities that are responsible for enforcement in their sector for any sector-specific guidance on how these laws apply to AI applications. Companies should also closely monitor legislative developments, and consider participating in the dialogue with policymakers on AI legislation to inform legislative efforts in this area.


The Inside Track

What skills and experiences have helped you to navigate AI issues as a lawyer?

At Covington, we have been working with leading technology and internet companies for decades, and we have a deep understanding of the sector and of technology and digital products and services. Throughout that period, we have helped clients navigate the full range of evolving legal landscapes applicable to their innovations. We take a multi-disciplinary approach, and as a firm, we are also focused on collaboration across our lawyers and on bringing the best team to any given matter; this is essential when advising on AI-related projects, because those projects often raise issues under multiple legal regimes. We also work closely together across offices, which again is important given the global nature of our clients’ services and solutions.

Which areas of AI development are you most excited about and which do you think will offer the greatest opportunities?

The development of AI technology is affecting virtually every industry and has tremendous potential to promote the public good. In the healthcare sector, for example, AI will continue to have an important role in helping to mitigate the effects of covid-19, along with potentially improving health outcomes while reducing costs. AI also has the potential to enable more efficient use of energy and other resources and to improve education, transportation, and the health and safety of workers. We are excited about these and many other opportunities presented by AI.

What do you see as the greatest challenges facing both developers and society as a whole in relation to the deployment of AI?

AI has tremendous promise to advance economic and public good in many ways and it will be important to have policy frameworks that allow society to capitalise on these benefits and safeguard against potential harms. As this publication explains, several jurisdictions are advancing different legal approaches with respect to AI. One of the great challenges is to develop harmonised policy approaches that achieve desired objectives. We have worked with stakeholders in the past to address these challenges with other technologies, and we are optimistic that workable approaches can be crafted for AI.


Voya Celebrates Success of 24/7 Chatbot and Artificial Intelligence

One year since launch, Voya PAL expands upon Voya’s customer experience offerings, engaging in more than 120,000 interactions

Voya Financial, Inc., announced today the one-year milestone of Voya PAL, an intelligent chatbot providing customers with 24/7 service availability for their workplace benefits and savings needs. Voya PAL was introduced as part of the company’s ongoing focus on expanding customer experiences and, ultimately, on giving customers multiple ways to connect with Voya in the ways most helpful to them. Since its launch one year ago, Voya PAL has engaged in more than 120,000 customer interactions across Voya’s Wealth Solutions and Health Solutions businesses, and it has resolved over 70% of cases entirely within the chatbot.

“As the retirement industry continues to advance its use of digital technology, providing a continuous expansion of solutions for our customers is critical,” said Heather Lavallee, president and CEO-elect, Voya Financial. “At Voya, we are continuously optimizing and expanding our suite of digital solutions to support our customers when, where and how they need it. As we continue to build upon the customer experience with new capabilities, the launch of Voya PAL has provided a great opportunity for us to leverage the latest advancements in technology to not only elevate the customer experience but, ultimately, help drive better outcomes as a result.”

With advancements in conversational artificial intelligence (AI), the use of digital assistants and chatbots continues to increase across industries, including financial services. Offering fast, efficient information around the clock, chatbots use AI to hold real-time conversations with customers, enhancing the customer experience through convenient, direct engagement. Underscoring the growing interest in this capability, Voya’s data show that in the third quarter of 2022 alone there were nearly 50,000 Voya PAL chat engagements, with nearly three-quarters (72%) fully resolved via the chatbot experience.

How PAL works

Voya PAL leverages real-time AI that enables the chatbot to quickly understand customer intent and provide an easy, intuitive interaction experience. Specifically, Voya plan participants are offered a pre-login chat experience that serves as an efficient digital interaction. Participants who have authenticated on the website can use the chatbot to make changes and get assistance with certain transactions — similar to their experience with live agents over the phone or in chat. Notably, in the two months since additional AI capability was released, PAL has increased Voya’s self-service rate from 70% to 85%.
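
Voya has not published PAL's internals, but the intent-understanding step such chatbots rely on can be sketched generically: a classifier maps a customer utterance to an intent that drives the dialog. The examples and intent labels below are invented.

```python
# Generic toy sketch of chatbot intent routing; not Voya PAL's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = ["what is my account balance", "change my contribution rate",
            "I forgot my password", "update my beneficiary"]
intents = ["balance", "contribution", "password_reset", "beneficiary"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(examples, intents)
print(clf.predict(["how much money do I have saved"]))  # routes to an intent
```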

Building on the initial success, Voya remains focused on building out the experience with future enhancements to include:

  • CSA support: Using Voya PAL to gather information for customer service associates (CSAs) while the CSA is on a call with a customer, enabling CSAs to engage more with customers while Voya PAL does the work of navigating systems behind the scenes and presenting that information to the CSA to help them resolve a customer’s inquiry.
  • myVoyage: Using Voya PAL within Voya’s myVoyage, a first-of-its-kind personalized financial-guidance and connected workplace-benefits digital platform, to help customers navigate the process and answer their frequently asked questions.

“Voya is committed to investing in the latest digital technologies as a way to distinguish our customer experience from all others,” said Santhosh Keshavan, EVP and chief information officer, Voya Financial. “We continually work to anticipate and identify opportunities to leverage digital platforms to provide experiences that reflect the specific needs of our customers. Voya PAL offers yet another alternative for individuals to seek answers to their questions and find resolution to their needs — and as more customers engage with our digital assistant, Voya PAL is using AI capabilities to consume more data, learn more scenarios and become even more in tune with customer needs.”

Voya PAL builds on the company’s continued focus and investments in digital solutions that help improve the financial outcomes of all individuals. Most recently, the firm launched myVoyage, a one-stop solution providing individuals with a complete view of their financial picture, inclusive of workplace benefits and savings accounts along with the integration of external accounts such as personal banking and credit accounts, to help better manage one’s health and financial well-being.

As an industry leader focused on the delivery of workplace benefits, savings, and investment solutions to and through the workplace, Voya is committed to delivering on its mission to make a secure financial future possible for all Americans — one person, one family, one institution at a time.


Why is it important to invest in artificial Intelligence? » FINCHANNEL

Artificial intelligence is an area experiencing incredible growth. Although artificial intelligence as a concept has existed since the mid-1950s, it wasn’t until the late 1990s that AI was first utilized in business applications. Since then it has become a growing area of interest among professionals, and the trend is expected to continue. AI is seen as the next big thing in business, and many companies are searching for the best way to implement it. AI has the potential to make a great impact on your business: its ability to conduct research on your behalf, find relevant information and organize it can take your company’s efficiency to the next level. From a business perspective, AI can also crunch through mountains of data to find patterns and create valuable insights that help businesses better target their customers.

Advantages of using AI consulting services

– Predictive analysis – Predictive analysis is the ability of AI to analyze historical data and predict what will happen in the future. It can be used to forecast, for example, customer behavior, and can help businesses identify trends in their data and develop strategies to act on them (see the sketch after this list).

– Collaborative decision-making – AI is also useful when it comes to collaborative decision-making. With the desire to rely less on human input in some cases, businesses are exploring new ways to organize and make decisions. AI can help businesses automate and conduct collaborative decision-making, which can reduce costs and provide benefits such as the ability to conduct more effective audits.

– Advanced data analytics – AI can also help businesses with advanced data analytics by plowing through extensive datasets to find patterns and create valuable insights to support businesses in better targeting of their customers.

– Other advantages of AI include the ability to work uninterrupted 24/7, reduce the risk of human error, and the potential to increase operational efficiency across the organization.
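
As promised above, here is a minimal sketch of the predictive-analysis idea (all data invented): fit a model on historical customer behavior and forecast the next period.

```python
# Toy forecast of customer behavior from 12 months of invented history.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)          # past 12 months
purchases = np.array([110, 115, 122, 130, 128, 140,
                      145, 150, 149, 160, 168, 175])  # monthly purchases

model = LinearRegression().fit(months, purchases)
print(model.predict([[13]]))  # forecast for month 13
```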

Disadvantages of using AI services

– Security concerns – AI has received a lot of attention over the last few years, and many experts are warning about security concerns. While AI can help significantly advance a variety of industries, it may also enable criminals to develop new techniques for hacking into companies and stealing data.

– Lack of standardization – AI is still fairly new, and it’s still evolving. Therefore, there aren’t many standards that businesses can use to ensure the quality of their AI solutions.

– Another disadvantage of AI is that it cannot necessarily perform tasks that a human can do better. Even as AI improves at specific tasks, in many areas human input remains necessary.

– Other disadvantages of AI include higher costs, the need for vast amounts of data, and the risk of companies creating more complex problems than they can solve.

How to choose an AI service provider?

The choice of an AI service provider can have an enormous impact on the success of your project: choosing the wrong partner can be costly and time-consuming and can waste a great deal of money. It is therefore important to choose your AI service provider carefully to ensure that you get the most out of your investment. There are a few things to consider when choosing an AI service provider, among them the company’s experience, how knowledgeable the team is, how strong their technological background is, how scalable the technology they use is, how much control you have over the project, and how experienced the engineers are.

Which Artificial Intelligence Service Should You Use?

This question is a bit tricky, because there are a lot of AI consulting services (https://neurosys.com/services/ai-consulting) out there, and most of them claim to be the best AI service provider on the market. It is therefore important to take the time to choose the right one; deciding on the wrong one can be costly and time-consuming. There is no single easy answer to the question above, but with the information you already have, you know which qualities are vital for your project.

Conclusion

Artificial intelligence can significantly improve your business outcomes by automating and managing work more effectively, finding trends in data, and supporting better-targeted strategies. To get the most out of the investment, however, you need to choose your AI service provider carefully.


Artificial intelligence now pens a book

The auction of the Portrait of Edmond de Belamy in 2018 made the world take notice of the creativity of artificial intelligence (AI), a trait believed to be possessed only by humans. The painting, signed by the ‘painter’ (a part of the algorithm code), showcased AI’s creative prowess amid looming fears of the rise of the machines.

Now, a set of AI algorithms coded by Mumbai-based Fluid AI has penned a book, Bridging the AI Gap, which the company claims is the first one to be completely written by AI algorithms.

“AIs generally write small paragraphs for social media posts or a maximum of one-page-long blog posts. This is the first time they have written a full-fledged 102-page book,” Fluid AI co-founder and CEO Abhinav Aggarwal said. It took the ‘AIs’ just three days to write the entire book.

However, the algorithms underwent an overall six months of training before they started writing the book. A total of 102,000 lines of code — written by Fluid AI founders Abhinav and his brother Raghav — were used, while the AI was trained on billions of literature files.

The book is about why some companies are able to generate immense value through AI while many others are not. “Who better knows the answer than AI,” he said.

The book also explains the business uses of AI and how users can learn the technology, among others.

“Being an educational and a non-fiction book, there is no emotion in it, but there are passages that are conversational…” Fluid AI co-founder and MD Raghav Aggarwal said.

Fluid AI is planning to put the book to a Turing test on social media. The Turing test, originally called the imitation game by Alan Turing (considered the father of modern computer science) in 1950, tests a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

“If 50% of the participants can’t figure out whether the book was written by a human or a machine, we would consider it as a success,” Abhinav said.

Bridging the AI Gap is now available on Amazon and Kindle, priced at ₹350 (paperback) in India, and at $8 (paperback) and $15 (hardcover) in the US.

Fluid AI, which has been working on AI-related projects since 2012, has been granted three US patents for its AI technologies.

Fluid AI was founded by brothers Raghav and Abhinav Aggarwal, who featured in the Forbes 30 Under 30 Asia list for 2017 and the Fortune 40 Under 40 India list for 2018.
