
How to avoid buying biased AI-based marketing tools and the application of artificial intelligence in risk management | Technology


How to avoid buying biased AI-based marketing solutions?

A marketer's relationship with an AI resource can take many forms, from building the AI in-house to buying it from outside. At one extreme, marketers often use fully off-the-shelf AI from vendors. For example, a marketer might target a pre-built audience on a DSP (Demand-Side Platform), which might be the output of a lookalike model trained on the vendor's own visitor datasets.

At the other extreme, marketers can use an external technology platform to "BYOA" ("Bring Your Own Algorithm," a growing trend): they supply their own training datasets, conduct their own training and testing, and then bring the finished model to the DSP. There are many variations on this, such as providing the marketer's first-party data to build custom models.

Below is a list of questions for marketers evaluating ready-made, mature AI-powered products. These products deserve the most scrutiny because they are the most likely to be presented to marketers as a black box, and therefore carry the greatest risk of uncertainty and unidentified bias. Black boxes are also difficult to pick apart, which makes comparing vendors very hard.

That said, all of these questions are relevant to any AI-based product, no matter where it is built. If you are building AI internally, it is just as important to ask these same questions as part of that process.


How do you know your training data is accurate?

When it comes to AI, it's garbage in, garbage out. Great training data doesn't guarantee a great AI, but poor training data does guarantee a bad one.

There are several ways data can be harmful to training, but the most obvious is that it is simply wrong. Most marketers don't realize how inaccurate the datasets they rely on are. In fact, the Advertising Research Foundation (ARF) recently published a rare study of demographic data accuracy across the industry, and the results are astounding. Industry-wide, "households with children" data is wrong 60% of the time, "single" marital status data is wrong 76% of the time, and "small business ownership" data is wrong 83% of the time. And these are not errors in model predictions of consumer attributes; they are errors in the very datasets used to train the models!

Inaccurate training data can derail algorithm development. For example, suppose an algorithm optimizes the dynamic creative elements of a travel campaign based on geographic location. If the training data rests on inaccurate location data, consumers in the southwestern United States might be shown an ad about driving to a Florida beach vacation, or consumers in the Ozarks might be pitched a fishing trip out of Seattle. The model's view of reality becomes noisy, which leads to a suboptimal algorithm.

Do not assume that your data is accurate. Consider the source, compare it against other sources, check it for internal consistency, and validate it against a ground-truth set wherever possible.
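As a rough illustration, here is a minimal Python sketch of such a ground-truth audit. The field names, records, and verified panel are all hypothetical; in practice the ground truth might come from a survey or a verified first-party source.

```python
# Hypothetical vendor demographic data, keyed by user ID.
vendor_data = {
    "user_1": {"has_children": True,  "marital_status": "single"},
    "user_2": {"has_children": False, "marital_status": "married"},
    "user_3": {"has_children": True,  "marital_status": "single"},
}

# A small panel whose attributes were verified directly (e.g., via survey).
ground_truth = {
    "user_1": {"has_children": False, "marital_status": "single"},
    "user_2": {"has_children": False, "marital_status": "single"},
    "user_3": {"has_children": True,  "marital_status": "single"},
}

# For each attribute, measure how often the vendor disagrees with the truth.
for attribute in ["has_children", "marital_status"]:
    overlap = vendor_data.keys() & ground_truth.keys()
    wrong = sum(
        vendor_data[uid][attribute] != ground_truth[uid][attribute]
        for uid in overlap
    )
    print(f"{attribute}: {wrong / len(overlap):.0%} incorrect")
```

Even a small verified panel like this can reveal whether a dataset is anywhere near the accuracy a vendor claims.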


How do you know if your training data is comprehensive and diverse?

Good training data must also be comprehensive, meaning you need many examples that illustrate all the situations and outcomes you are trying to drive. The more comprehensive the data, the more confident you can be in the patterns the model finds.

This is especially important for AI models designed to optimize for rare outcomes. Download campaigns for freemium mobile games are a good example. These games often rely on a small group of "whales" who make many in-app purchases, while everyone else buys little or nothing. To train an algorithm to find whales, it is important to ensure the dataset contains enough examples of whales' consumer journeys for the model to learn what a whale looks like. Left alone, training datasets will be biased toward non-whales simply because non-whales are far more common.
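To make that imbalance concrete, here is a minimal Python sketch that measures how rare the "whale" class is in a training set and derives inverse-frequency class weights, one common mitigation. The label counts are made up.

```python
from collections import Counter

# Hypothetical training labels: 1 = "whale" (heavy in-app spender), 0 = everyone else.
labels = [0] * 980 + [1] * 20

counts = Counter(labels)
whale_share = counts[1] / len(labels)
print(f"Whales are {whale_share:.1%} of the training set")  # 2.0%

# Weight each class inversely to its frequency so the model is not dominated
# by the majority class (scikit-learn's class_weight="balanced" uses this scheme).
class_weights = {cls: len(labels) / (len(counts) * n) for cls, n in counts.items()}
print(class_weights)  # {0: ~0.51, 1: ~25.0}
```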

Another complementary angle is diversity. For example, if you are using artificial intelligence to market a new product, your training data may consist primarily of early adopters, who may skew on HHI (household income), life stage, age, and other factors. As you push your product beyond early adopters toward more mainstream consumer audiences, it is important to ensure that your training dataset is diverse and includes later adopters as well.


What did you test?

Many companies testing AI focus on the algorithm's overall success metrics, such as accuracy or precision. That matters, of course. But when it comes to bias, testing can't stop there. A good way to test for bias is to document the specific subsets that matter to the algorithm's main use case. For example, if an algorithm is built to optimize conversions, you might run separate tests for large items versus small items, new customers versus existing customers, or different creative types. With that list of subsets in hand, track the same set of algorithm success metrics for each individual subset to see where the algorithm performs significantly worse than it does overall.
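As a sketch of what subgroup testing can look like in practice, the short Python example below computes the same success metric overall and then separately per subgroup. The records, the `segment` field, and the simple accuracy metric are illustrative assumptions.

```python
# Each record: which subgroup it belongs to, the model's prediction,
# and the observed outcome (did the user convert?).
def accuracy(records):
    return sum(r["predicted"] == r["converted"] for r in records) / len(records)

results = [
    {"segment": "new_customer",      "predicted": 1, "converted": 1},
    {"segment": "new_customer",      "predicted": 1, "converted": 0},
    {"segment": "existing_customer", "predicted": 0, "converted": 0},
    {"segment": "existing_customer", "predicted": 1, "converted": 1},
]

# The overall number can hide a weak subgroup, so report both.
print(f"overall: {accuracy(results):.0%}")
for segment in {"new_customer", "existing_customer"}:
    subset = [r for r in results if r["segment"] == segment]
    print(f"{segment}: {accuracy(subset):.0%}")
```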

A recent report on AI bias from the IAB (Interactive Advertising Bureau) provides a full infographic to guide marketers through the decision tree behind this subgroup testing method.


Can we do our own tests?

If marketers use vendor tools, it is recommended that they not simply trust the vendor's tests, but also conduct their own, using the key subgroups that matter most to their business.

The key is to track the algorithm's performance across subgroups. It is unlikely that all of them will perform the same. Where they don't, can you tolerate the different levels of performance? Should the algorithm be used for all subsets, or only for specific ones?
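One way to make that tolerance question operational is a simple gap check against the best-performing subgroup, as in this hypothetical Python sketch; the accuracy figures and the 10-point threshold are invented and would need tuning to your business.

```python
# Hypothetical per-subgroup results from your own testing.
subgroup_accuracy = {
    "large_items":   0.81,
    "small_items":   0.78,
    "new_customers": 0.64,  # noticeably weaker
}

best = max(subgroup_accuracy.values())
TOLERANCE = 0.10  # maximum acceptable gap from the best-performing subgroup

for subgroup, acc in subgroup_accuracy.items():
    if best - acc > TOLERANCE:
        print(f"Flag: {subgroup} underperforms ({acc:.0%} vs best {best:.0%})")
```

A flagged subgroup is a candidate for either excluding that use case or retraining with more representative data.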


Did you test for bias on both sides?

When considering the potential impact of AI bias, I think of an algorithm as vulnerable on both the input side and the output side.

On the input side, imagine using a conversion-optimization algorithm for both high-consideration and low-consideration products. The algorithm will likely be more successful at optimizing for the low-consideration products, where the entire consumer decision happens online and there is a more direct path to purchase. For high-consideration products, consumers may research offline, visit stores, and talk to friends; there are far fewer direct digital paths to purchase, so the algorithm may be less accurate for this type of activity.

On the output side, imagine a conversion-optimized mobile commerce campaign. The AI engine may accumulate more training data for short-tail apps like ESPN or Words with Friends than for long-tail apps. So the algorithm may push the campaign toward short-tail inventory because it has better data on those apps and is therefore better able to find performance patterns. Over time, the marketer may find the campaign concentrating in expensive short-tail inventory while missing highly effective long-tail inventory.
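A lightweight way to catch this kind of output-side drift is to monitor the share of spend going to short-tail inventory over time. The following Python sketch uses invented weekly spend figures and an arbitrary alert threshold.

```python
# Hypothetical weekly spend, split by inventory type.
weekly_spend = {
    "week_1": {"short_tail": 5_000, "long_tail": 5_000},
    "week_2": {"short_tail": 6_500, "long_tail": 3_500},
    "week_3": {"short_tail": 8_200, "long_tail": 1_800},
}

ALERT_SHARE = 0.75  # assumed threshold at which concentration warrants review

for week, spend in weekly_spend.items():
    share = spend["short_tail"] / (spend["short_tail"] + spend["long_tail"])
    flag = "  <-- review for short-tail bias" if share > ALERT_SHARE else ""
    print(f"{week}: short-tail share {share:.0%}{flag}")
```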

The list of questions above can help you advance or adjust your AI work to reduce bias. In a world more diverse than ever, your AI solutions must reflect that diversity. Incomplete training data or insufficient testing will lead to suboptimal performance, and it is important to remember that testing for bias must be repeated systematically for as long as an algorithm is in use.


The application of artificial intelligence in risk management

Artificial intelligence (AI) is here to stay. Every day, companies discover more activities where this cutting-edge technology can improve efficiency and effectiveness. From marketing to customer service and even security, the combined power of information management and artificial intelligence raises the bar for businesses in a competitive environment.

AI tools bring their own set of managerial and operational advantages and risks. That is why companies must carefully evaluate the use of AI in their operations and understand the risks and rewards that come with the technology.

Society still doesn't agree on exactly what we mean by "artificial intelligence," so we can't point to a single definition. In general terms, we can say that artificial intelligence adapts to users' needs by analyzing usage patterns in a sea of information drawn from different data sources or general guidelines.

Platforms like Trello have started incorporating machine-learning-based AI capabilities to predict users' repetitive activities. Other applications have built their business models around AI techniques, such as QuillBot, a paraphrasing service powered by natural language processing.

Enterprise risk management can use these techniques to streamline business processes and use resources efficiently.


Will AI change the game in risk management?

It would be more accurate to say that AI is already changing the game: the shift is not a distant prospect but something happening right now.

For example, banks and financial technology (fintech) companies are implementing risk management systems with AI solutions to streamline decision-making, reduce credit risk, and deliver financial services to their users through automation and machine learning algorithms. AI's ability to analyze massive amounts of data has greatly improved the identification of the data that matters for cybersecurity risk management, risk assessment, and sound business decisions.


Some specific uses that benefit from artificial intelligence integrated with risk management systems 

Threat Analysis and Management

Machine learning engines can analyze large amounts of data from a variety of sources. This information feeds real-time predictive models that enable risk managers and security teams to respond quickly to emerging risks. Such models are critical for building early warning systems that ensure the continuity of an organization's operations and protect its stakeholders.
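As one hypothetical example of such an early-warning check, the Python sketch below flags a reading that deviates sharply from a rolling baseline; the signal (say, failed logins per minute), the window size, and the z-score cutoff are all assumptions.

```python
from statistics import mean, stdev

# Synthetic risk-signal readings, e.g., failed logins per minute.
readings = [102, 98, 101, 99, 103, 100, 97, 160]
WINDOW, THRESHOLD = 5, 3.0  # assumed rolling-window size and z-score cutoff

for i in range(WINDOW, len(readings)):
    window = readings[i - WINDOW:i]
    # Compare each new reading against the recent baseline.
    z = (readings[i] - mean(window)) / stdev(window)
    if abs(z) > THRESHOLD:
        print(f"Alert at t={i}: value {readings[i]} (z-score {z:.1f})")
```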

Risk Reduction

AI also makes it possible to assess unstructured data for risky behaviors or activities within an organization's operations. AI algorithms can identify patterns of behavior associated with past incidents and surface them as predictors of risk.

Fraud Detection

Fraud detection has traditionally required rigorous analytical work at financial institutions and insurance companies. By using machine learning models focused on text mining, social media analysis, and database searches, AI systems can significantly reduce the workload of these processes and reduce the risk of fraud.
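As a toy illustration of the text-mining approach (assuming scikit-learn is available), the sketch below trains a TF-IDF plus logistic-regression scorer on a handful of entirely synthetic claim descriptions; a real system would need far more data and human review of anything it flags.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Entirely synthetic claim texts and labels (1 = flagged by past investigations).
claims = [
    "minor fender bender in parking lot, photos attached",
    "total loss, no police report, cash settlement requested urgently",
    "windshield crack from road debris, repair receipt included",
    "third claim this year, witness unavailable, urgent payout needed",
]
labels = [0, 1, 0, 1]

# Turn free text into n-gram features, then fit a simple scorer.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(claims)
model = LogisticRegression().fit(X, labels)

new_claim = ["no receipt, urgent cash settlement requested"]
score = model.predict_proba(vectorizer.transform(new_claim))[0, 1]
print(f"Suspicion score: {score:.2f}")  # route high scores to a human analyst
```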

Data Classification

AI tools can process and classify all available data according to predefined patterns and categories, and monitor access to these datasets.
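A minimal version of such pattern-based classification can be sketched with regular expressions, as below; the categories and patterns are illustrative only and nowhere near exhaustive enough for real compliance use.

```python
import re

# Hypothetical sensitive-data categories and the patterns that detect them.
PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone":       re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
}

def classify(record: str) -> list[str]:
    """Return the sensitive-data categories detected in a record."""
    return [name for name, rx in PATTERNS.items() if rx.search(record)]

print(classify("Contact jane@example.com or 555-123-4567"))
# ['email', 'phone'] -> tag the record and restrict who may access it
```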


How to use artificial intelligence in risk management programs?

Unfortunately, these benefits are not without risks. When implementing AI technologies, businesses should pay particular attention to the challenges they present, such as how data is collected and used, as well as the cost of implementation.

The following approaches can be used to implement AI models within your company to reduce "AI risk" and take advantage of the benefits these tools can bring to your organization:

Ideation

The first step in implementing an AI-driven risk management system is to identify the organization's regulatory and reputational risks. Conduct a risk assessment based on your company's current structure and the regulatory standards that apply to it, then use it to decide what data you need to collect and what you want to do with that information.

Data Source

Based on the previous risk assessment, you can determine which datasets are suitable for AI model processing and which are not. Think carefully about what data to use and where to obtain it. Even at the operational level, choosing the right dataset affects the quality of the results, so identifying data sources is a crucial step in implementing the ecosystem.

Model Development

Once you have useful data, build a useful model. Keep in mind the level of transparency you need your AI to operate with, since opaque tools are not recommended for high-risk activities. Review any regulatory restrictions on how AI can be used in the specific business processes involved, and consider how the AI can advance your organization's business goals.

Monitoring

As with other risk management tools, the use of AI must be continually assessed and modified. It is important to consider the changing needs of the organization and the potential pitfalls of the technology.

Sources: Jake Moskowitz, Emodo Institute, Venture Beat, Direct News 99, Reciprocity, Googly Market