Generative Artificial Intelligence (AI)
Generative AI is a term that has gained prominence with the rise of applications such as deepfakes. It uses AI and machine learning techniques to enable machines to generate artificial content such as text, images, audio, and video from training data, often so convincingly that the user believes the output is real.
What is Generative AI?
Generative AI is the technology that uses existing text, audio files, or images to produce new content. With generative AI, computers recognise the underlying patterns in the input and generate similar, new information.
Generative AI encompasses several techniques, including:
1. Generative adversarial networks (GANs):
A GAN consists of two neural networks, a generator and a discriminator, that are pitted against each other until the two reach an equilibrium (a minimal training sketch appears after this list):
- The generator network is responsible for producing new data or content that resembles the source data.
- The discriminator network is responsible for distinguishing the generated data from the source data, learning to recognize which samples are closer to the original.
2. Transformers:
- Transformers, such as GPT-3, LaMDA, and Wu Dao, imitate cognitive attention, differentially weighing the significance of each part of the input data.
- They are trained on massive datasets to understand language or images, learn classification tasks, and generate new text or images.
3. Variational auto-encoders:
- The encoder compresses the input into a compact latent code, while the decoder reconstructs the original information from that code.
- If the model is chosen and trained correctly, this compressed representation captures the distribution of the input data in a much lower-dimensional space.
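To make the generator/discriminator interplay from technique 1 concrete, here is a minimal sketch that trains a GAN on a toy one-dimensional Gaussian dataset. It assumes PyTorch (the article does not prescribe a framework), and the model sizes, data, and training schedule are purely illustrative rather than a recommended recipe.

```python
# Minimal GAN sketch (assumes PyTorch): the generator learns to produce samples
# resembling a toy 1-D Gaussian "dataset", while the discriminator learns to
# tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a single "data" value.
G = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that its input is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from a Gaussian with mean 3 and std 0.5.
    real = torch.randn(64, 1) * 0.5 + 3.0
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Train the discriminator to separate real from generated samples.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# Generated samples should drift toward the real data's mean (~3).
print(G(torch.randn(5, latent_dim)).detach().squeeze())
```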
Some notable applications of Generative AI are:
Image-to-image Conversion: It translates one kind of image into another, for example:
- black and white photographs to color,
- day photos to night photos,
- a photo to an artistic painting or
- satellite photos to Google Maps-style views.
Text-to-image Translation: It produces realistic photographs from textual descriptions of simple objects like birds and flowers.
Image Processing: Generative models can improve image-processing pipelines, for example by converting low-resolution images into high-resolution ones.
Semantic-Image-to-Photo Translation: It translates semantic images or sketches into highly realistic pictures.
Benefits of Generative AI
Identity Protection: Generative AI avatars provide protection for people who do not want to disclose their identities while interviewing or working.
Fraud Detection: Automating fraud detection processes has helped identify illegal and suspicious activities. AI detects illicit transactions using predefined algorithms and rules.
Sentiment Analysis: Machine learning uses text, image, and voice analysis to understand customer sentiment. AI algorithms study web activity and user data to gauge customer opinion of products and services.
Healthcare: Generative AI can be employed to render prosthetic limbs, organic molecules, and other items from scratch when actuated through 3D printing, CRISPR, and other technologies. It can also enable early identification of potential malignancy, leading to more effective treatment plans. IBM is currently using this technology to research antimicrobial peptides (AMPs) in the search for COVID-19 drugs.
Generative AI from a research perspective
Improving cybersecurity
Instances of cyber threats have increased in the last few years. Organizations are adopting advanced security measures to prevent sensitive information from being leaked and misused. Yet, hackers are coming up with new methods to obtain and exploit user data. Criminal activities like blackmailing users to keep their information private, publicly posting data to humiliate people, or tarnishing their images using fake images and videos are on the rise and are a grave concern.
Generative adversarial networks can be trained to identify such instances of fraud and to make deep learning models more robust. A neural network can be trained to detect malicious information that hackers add to images: researchers and analysts deliberately create fake examples and use them to train the network, which improves as it analyzes more images.
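As a toy illustration of this idea, the sketch below (assuming PyTorch) trains a small classifier on deliberately manipulated images alongside genuine ones, so that it learns to flag tampering. The random tensors are stand-ins for real image datasets, which the sketch does not include.

```python
# Toy tamper-detection sketch (assumes PyTorch): a classifier is trained on
# purposely created "fake" examples alongside genuine ones.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),  # logit: > 0 means "manipulated"
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    genuine = torch.rand(16, 3, 64, 64)                    # placeholder for authentic images
    tampered = genuine + 0.3 * torch.randn_like(genuine)   # placeholder "forged" images
    images = torch.cat([genuine, tampered])
    labels = torch.cat([torch.zeros(16, 1), torch.ones(16, 1)])

    opt.zero_grad()
    loss = loss_fn(detector(images), labels)
    loss.backward()
    opt.step()
```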
Generative AI for Software Engineering
One of the strengths of generative algorithms and applications is their ability to create instances from a "learned" class of examples, including projects that involve images, videos, music, molecules, texts of many types, and diverse other media and categories. When analyzed as sequences of tokens, these "learned" patterns can serve as predictions of, for example, the next word in a text or the next token in a software program.
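As a short illustration of this next-token view, the snippet below uses the Hugging Face transformers library and the small GPT-2 model (both are assumptions; the article names neither a library nor a model) to continue a code-like prompt one predicted token at a time.

```python
# Next-token prediction with a pretrained causal language model
# (assumes the Hugging Face `transformers` package is installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the next token given the tokens seen so far.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; reuse EOS
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```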
While generative AI is often associated with the proliferation of hyper-realistic fake content created for malicious purposes, the technology also promises significant progress, particularly in the medical field.
“According to the consulting firm Gartner, more than 30% of new drugs and materials will be discovered using generative AI techniques by 2025.”
Synthetic brain MRI
Medicine is one of those areas where data is not widely available, due to its rarity – medical images with abnormal findings are by definition infrequent – and the legal restrictions on the use and sharing of patient records.
In 2018, in the United States, researchers from Nvidia, the Mayo Clinic and the MGH & BWH Center for Clinical Data Science developed a model capable of producing synthetic brain MRIs showing tumours, which can be used to train a deep learning model. The research team believes that these synthetic images are both a complementary tool for data augmentation and an effective method of anonymization. They provide a low-cost source of diverse data, which has improved the performance of tumour segmentation (the process of distinguishing tumour tissue from normal brain tissue on an MRI scan) while allowing data sharing between different institutions.
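A minimal sketch of how such synthetic images can serve as data augmentation, assuming PyTorch: GAN-generated scans are simply mixed with the real training set before the downstream model (e.g. a tumour-segmentation network) is trained. Random tensors stand in for the actual MRI data, which the sketch does not include.

```python
# Mixing real and synthetic scans into one training set (assumes PyTorch).
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

real_scans = TensorDataset(torch.rand(100, 1, 64, 64), torch.randint(0, 2, (100,)))
synthetic_scans = TensorDataset(torch.rand(300, 1, 64, 64), torch.randint(0, 2, (300,)))

# The combined dataset lets the downstream model see far more diverse examples
# than the real data alone would allow.
train_set = ConcatDataset([real_scans, synthetic_scans])
loader = DataLoader(train_set, batch_size=32, shuffle=True)

for images, labels in loader:
    pass  # training step for the downstream model (e.g. tumour segmentation) goes here
```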
Accelerated drug development
Pharmacology could also benefit from this approach. Designing a new drug is difficult, expensive and time-consuming: it typically takes more than twelve years and an average of one billion euros for a market launch. One of the reasons the cost is so high is that the synthesis of thousands of molecules is necessary before a pre-clinical study is started, in order to identify one candidate. This process requires the use of multi-objective optimisation methods to explore a vast “chemical space” (a virtually infinite expanse containing all possible molecules and chemical compounds), as the AI system must evaluate and make decisions related to several key criteria such as the drug’s activity, its toxicity or the ease with which it can be synthesized. The optimisation methods in question require a large amount of training data, which can in part be provided by generative models.
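The following toy sketch illustrates the multi-objective selection step in plain Python. The scoring functions are hypothetical placeholders: in a real pipeline each would be a trained predictive model (activity, toxicity, synthesizability), and the candidate molecules would come from a generative model rather than a hard-coded list.

```python
# Toy multi-objective scoring of candidate molecules (SMILES strings as examples).
candidates = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]

def predicted_activity(smiles: str) -> float:   # hypothetical model, higher is better
    return len(smiles) / 30.0

def predicted_toxicity(smiles: str) -> float:   # hypothetical model, lower is better
    return smiles.count("O") / 10.0

def synthesizability(smiles: str) -> float:     # hypothetical heuristic, higher is better
    return 1.0 - len(smiles) / 50.0

def score(smiles: str) -> float:
    # A weighted sum is the simplest multi-objective compromise;
    # real pipelines often use Pareto ranking instead.
    return (1.0 * predicted_activity(smiles)
            - 0.5 * predicted_toxicity(smiles)
            + 0.3 * synthesizability(smiles))

best = max(candidates, key=score)
print(best, round(score(best), 3))
```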
Generative AI in detecting blindness
As diabetics know, the condition has several adverse long-term effects, one of which is retinopathy. Diabetic retinopathy involves progressive damage to a patient's retina, to the point of vision impairment. Generative AI-based applications can evaluate millions of images of patients affected by retinopathy and then generate new datasets that cover every scenario, including how retinopathy looks at an early stage. With such datasets, ophthalmologists can take preventive measures to eliminate, or at least mitigate, diabetic retinopathy in patients.
COVID-19 antiviral design
Due to the novel nature of COVID-19, there is very limited binding-affinity data between SARS-CoV-2 target proteins and small, drug-like molecules, making it challenging to generate drug molecules with high affinity to novel SARS-CoV-2 proteins. Additionally, accounting for high target selectivity is crucial for optimal drug generation, in order to avoid undesired toxic and adverse effects arising from off-target activity, which could lead to failure in the later stages of discovery. The proposed generative framework tackles these challenges by learning protein-ligand binding relationships on pre-trained latent features of protein sequences and small drug-like molecules, obtained from large corpora of unlabeled data. Openly sharing the AI-generated artifacts in the explorer is a first step toward establishing a community that can help find optimal designs as efficiently as possible.
Conclusion
Generative AI is set to disrupt more industries than we can imagine. It is already finding applications in crucial fields such as healthcare and defense, and as the technology evolves it will find more advanced uses. As different industries adopt it, we are likely to see a considerably greater range of use cases emerge in the coming years.
Sources
- http://ceur-ws.org/Vol-3124/paper11.pdf
- https://hellofuture.orange.com/en/generative-ai-a-new-approach-to-overcome-data-scarcity/
- https://www.forbes.com/sites/naveenjoshi/2022/03/23/exploring-the-plethora-of-use-cases-of-generative-ai-in-various-sectors/?sh=1b2e78ba1ff4
- https://www.ibm.com/blogs/research/2020/06/accelerated-discovery/