Multimodal models are fast becoming a reality — consequences be damned

By seprameen | December 21, 2021 | AI



Roughly a year ago, VentureBeat wrote about progress in the AI and machine learning field toward developing multimodal models, or models that can understand the meaning of text, videos, audio, and images together in context. Back then, the work was in its infancy and faced formidable challenges, not least of which involved biases amplified in training datasets. But breakthroughs have been made.

This year, OpenAI released DALL-E and CLIP, two multimodal models that the research lab claims are "a step toward systems with [a] deeper understanding of the world." DALL-E, inspired by the surrealist artist Salvador Dalí, was trained to generate images from simple text descriptions. Similarly, CLIP (for "Contrastive Language-Image Pre-training") was trained to associate visual concepts with language, drawing on example images paired with captions scraped from the public web.
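For readers who want to see what CLIP-style image-text matching looks like in practice, below is a minimal zero-shot classification sketch using the openly released CLIP checkpoint through the Hugging Face transformers library. The library, checkpoint name, test image, and captions are our illustration, not details taken from the article.

# Minimal sketch: zero-shot image-caption matching with a public CLIP checkpoint.
# Assumes torch, transformers, Pillow, and requests are installed.
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any image will do; this is a commonly used public test image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
captions = ["a photo of two cats", "a photo of a dog", "a drawing of a harp"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image's similarity to each caption.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.3f}  {caption}")

Running this prints the model's relative confidence that each caption describes the image, which is the zero-shot behavior the article refers to.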

DALL-E and CLIP are only the tip of the iceberg. A number of studies have demonstrated that a single model can be trained to learn the relationships between audio, text, images, and other forms of data. Some hurdles have yet to be overcome, like model bias. But already, multimodal models have been applied to real-world applications including hate speech detection.

Promising new directions

People understand events in the world contextually, performing multimodal reasoning across time to make inferences about the past, present, and future. For example, given text and an image that seem innocuous when considered separately (e.g., "Look how many people love you" and a picture of a barren desert), people recognize that these elements take on potentially hurtful connotations when they are paired or juxtaposed.
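As a purely illustrative sketch of how a system can score an image and its caption jointly rather than separately, the snippet below fuses frozen CLIP image and text embeddings with a small classifier head (late fusion). The head, its dimensions, and the idea of training it on labeled pairs are our assumptions for illustration; this is not Meta's hateful-meme system or any model described in the article.

# Illustrative late-fusion classifier over CLIP embeddings (hypothetical head).
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")  # frozen encoder
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class PairClassifier(nn.Module):
    """Scores an (image, text) pair from its concatenated embeddings."""
    def __init__(self, dim=512, hidden=256):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(2 * dim, hidden),
                                  nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, image_emb, text_emb):
        return self.head(torch.cat([image_emb, text_emb], dim=-1))

classifier = PairClassifier()  # would be trained on labeled pairs in practice

def score_pair(image, caption):
    """Returns a probability-like score for the pair (head is untrained here)."""
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        img = clip.get_image_features(pixel_values=inputs["pixel_values"])
        txt = clip.get_text_features(input_ids=inputs["input_ids"],
                                     attention_mask=inputs["attention_mask"])
    return torch.sigmoid(classifier(img, txt)).item()

The point of the design is that the classifier sees both modalities at once, so a caption that is harmless on its own can still be flagged in combination with a particular image.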

Above: Merlot can understand the sequence of events in videos, as demonstrated here.

Even the best AI systems struggle in this area. But those like the Allen Institute for Artificial Intelligence's and the University of Washington's Multimodal Neural Script Knowledge Models (Merlot) show how far the literature has come. Merlot, which was detailed in a paper published earlier in the year, learns to match images in videos with words and follow events over time by watching millions of transcribed YouTube videos. It does all this in an unsupervised manner, meaning the videos don't need to be labeled or categorized; the system learns from the videos' inherent structures.

"We hope that Merlot can inspire future work for learning vision plus language representations in a more human-like fashion compared to learning from literal captions and their corresponding images," the coauthors wrote in a paper published last summer. "The model achieves strong performance on tasks requiring event-level reasoning over videos and static images."

In this same vein, Google in June introduced MUM, a multimodal model trained on a dataset of documents from the web that can transfer knowledge between languages. MUM, which doesn't need to be explicitly taught how to complete tasks, is able to answer questions in 75 languages, including "I want to hike to Mount Fuji next fall, what should I do to prepare?" while understanding that "prepare" could encompass things like fitness as well as weather.

A more recent project from Google, Video-Audio-Text Transformer (VATT), is an attempt to build a highly capable multimodal model by training across datasets containing video transcripts, videos, audio, and images. VATT can make predictions for multiple modalities and datasets from raw signals, not only successfully captioning events in videos but also pulling up videos given a prompt, categorizing audio clips, and recognizing objects in images.

"We wanted to examine if there exists one model that can learn semantic representations of different modalities and datasets at once (from raw multimodal signals)," Hassan Akbari, a research scientist at Google who codeveloped VATT, told VentureBeat via email. "At first, we didn't expect it to even converge, because we were forcing one model to process different raw signals from different modalities. We observed that not only is it possible to train one model to do that, but its internal activations show interesting patterns. For example, some layers of the model specialize [in] a specific modality while skipping other modalities. Final layers of the model treat all modalities (semantically) the same and perceive them almost equally."
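One common ingredient behind models that align multiple modalities in a shared embedding space, including CLIP and, in different forms, VATT and FLAVA, is a contrastive objective that pulls matched pairs together and pushes mismatched pairs apart. The function below is a generic sketch of that objective (often called InfoNCE); the temperature value and the symmetric formulation are standard textbook choices, not details taken from these papers.

# Generic sketch of a symmetric contrastive (InfoNCE) loss between two
# modalities; the encoders that produce emb_a and emb_b are left abstract.
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """emb_a, emb_b: (batch, dim) embeddings of aligned pairs, e.g. video/text."""
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    logits = emb_a @ emb_b.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    # The i-th item in one modality should match the i-th item in the other.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Usage with random stand-ins for encoder outputs:
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))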

For their part, researchers at Meta, formerly Facebook, claim to have created a multimodal model that achieves "impressive performance" on 35 different vision, language, and crossmodal and multimodal vision-and-language tasks. Called FLAVA, the creators note that it was trained on a collection of openly available datasets roughly six times smaller (tens of millions of text-image pairs) than the datasets used to train CLIP, demonstrating its efficiency.

"Our work points the way forward towards generalized but open models that perform well on a wide variety of multimodal tasks" including image recognition and caption generation, the authors wrote in the academic paper introducing FLAVA. "Combining information from different modalities into one universal architecture holds promise not only because it is similar to how humans make sense of the world, but also because it may lead to better sample efficiency and much richer representations."


Not to be outdone, a team of Microsoft Research Asia and Peking University researchers has developed NUWA, a model that they claim can generate new or edit existing images and videos for various media creation tasks. Trained on text, video, and image datasets, the researchers claim that NUWA can learn to spit out images or videos given a sketch or text prompt (e.g., "A dog with goggles is staring at the camera"), predict the next scene in a video from a few frames of footage, or automatically fill in the blanks in an image that's partially obscured.

Above: NUWA can generate videos given a text prompt.

Image Credit: Microsoft

"[Previous techniques] treat images and videos separately and focus on generating either of them. This limits the models to benefit from both image and video data," the researchers wrote in a paper. "NUWA shows surprisingly good zero-shot capabilities not only on text-guided image manipulation, but also text-guided video manipulation."

The problem of bias

Multimodal models, like other types of models, are susceptible to bias, which often arises from the datasets used to train them.

In a study out of the University of Southern California and Carnegie Mellon, researchers found that one open source multimodal model, VL-BERT, tends to stereotypically associate certain types of apparel, like aprons, with women. OpenAI has explored the presence of biases in multimodal neurons, the components that make up multimodal models, including a "terrorism/Islam" neuron that responds to images of words like "attack" and "horror" but also "Allah" and "Muslim."

CLIP exhibits biases as well, at times horrifyingly misclassifying images of Black people as "non-human" and teenagers as "criminals" and "thieves." According to OpenAI, the model is also prejudicial toward certain genders, associating words having to do with appearance (e.g., "brown hair," "blonde") and occupations like "nanny" with pictures of women.

Like CLIP, the Allen Institute and University of Washington researchers note that Merlot can exhibit undesirable biases because it was only trained on English data and largely local news segments, which can spend a lot of time covering crime stories in a sensationalized way. Studies have demonstrated a correlation between watching the local news and having more explicit, racialized beliefs about crime. It's "very likely" that training models like Merlot on mostly news content could cause them to learn sexist as well as racist patterns, the researchers concede, given that the most popular YouTubers in most countries are men.

In lieu of a technical solution, OpenAI recommends "community exploration" to better understand models like CLIP and to develop evaluations that assess their capabilities and potential for misuse (e.g., generating disinformation). This, they say, could help increase the likelihood that multimodal models are used beneficially while shedding light on the performance gap between models.

Real-world applications

While some work remains firmly in the research phase, companies including Google and Facebook are actively commercializing multimodal models to improve their products and services.

For example, Google says it will use MUM to power a new feature in Google Lens, the company's image recognition technology, that finds objects like apparel based on photos and high-level descriptions. Google also claims that MUM helped its engineers identify more than 800 COVID-19 name variations in over 50 languages.

In the future, Google's VP of search Pandu Nayak says, MUM could connect users to businesses by surfacing products and reviews and improving "all kinds" of language understanding, whether at the customer service level or in a research setting. "MUM can understand that what you're looking for are techniques for fixing and what that mechanism is," he told VentureBeat in a previous interview. "The power of MUM is its ability to understand information on a broad level … That is the kind of thing that the multimodal [models] promise."

Meta, meanwhile, reports that it's using multimodal models to recognize whether memes violate its terms of service. The company recently built and deployed a system, Few-Shot Learner (FSL), that can adapt to take action on evolving types of potentially harmful content in upwards of 100 languages. Meta claims that, on Facebook, FSL has helped to identify content that shares misleading information in a way that would discourage COVID-19 vaccinations, or that comes close to inciting violence.

Future multimodal models might have even farther-reaching implications.

Researchers at UCLA, the University of Southern California, Intuit, and the Chan Zuckerberg Initiative have released a dataset called Multimodal Biomedical Experiment Method Classification (Melinda), designed to see whether current multimodal models can curate biological studies as well as human reviewers. Curating studies is an important but labor-intensive process carried out by researchers in the life sciences that requires recognizing experiment methods to identify the underlying protocols behind the figures published in research articles.

Even the best multimodal models available struggled on Melinda. But the researchers are hopeful that the benchmark motivates further work in this area. "The Melinda dataset could serve as a good testbed for benchmarking [because] the recognition [task] is essentially multimodal [and challenging], where justification of the experiment methods takes both figures and captions into account," they wrote in a paper.

Above: OpenAI's DALL-E.

Image Credit: OpenAI

As for DALL-E, OpenAI predicts that it might someday augment, or even replace, 3D rendering engines. For example, architects could use the tool to visualize buildings, while graphic artists could apply it to software and video game design. In another point in DALL-E's favor, the tool can combine disparate ideas to synthesize objects, some of which are unlikely to exist in the real world, like a hybrid of a snail and a harp.


Aditya Ramesh, a researcher working on the DALL-E team, told VentureBeat in an interview that OpenAI has been focusing for the past few months on improving the model's core capabilities. The team is currently investigating ways to achieve higher image resolutions and photorealism, as well as ways that the next generation of DALL-E, which Ramesh called "DALL-E v2," could be used to edit images and generate images more quickly.

"A lot of our effort has gone toward making these models deployable in practice and [the] kind of things we need to work on to make that possible," Ramesh said. "We want to make sure that, if at some point these models are made available to a large audience, we do so in a way that's safe."

Far-reaching consequences

"DALL-E shows creativity, producing useful conceptual images for product, fashion, and interior design," Gary Grossman, global lead at Edelman's AI Center of Excellence, wrote in a recent opinion article. "DALL-E could support creative brainstorming … either with idea starters or, someday, producing final conceptual images. Time will tell whether this will replace people performing these tasks or simply be another tool to boost efficiency and creativity."

It's early days, but Grossman's last point, that multimodal models might replace rather than augment humans, is likely to become increasingly relevant as the technology grows more sophisticated. (By 2022, an estimated 5 million jobs worldwide will be lost to automation technologies, with 47% of U.S. jobs at risk of being automated.) Another, related unaddressed question is how organizations with fewer resources will be able to leverage multimodal models, given the models' comparatively high development costs.

Another unaddressed question is how to prevent multimodal models from being abused by malicious actors, from governments and criminals to cyberbullies. In a paper published by Stanford's Institute for Human-Centered Artificial Intelligence (HAI), the coauthors argue that advances in multimodal models like DALL-E will result in higher-quality, machine-generated content that will be easier to personalize for "misuse purposes," like publishing misleading articles targeted to different political parties, nationalities, and religions.

"[Multimodal models] may … impersonate speech, motions, or writing, and potentially be misused to embarrass, intimidate, and extort victims," the coauthors wrote. "Generated deepfake images and misinformation pose greater risks as the semantic and generative capability of vision foundation models continues to grow."

Ramesh says that OpenAI has been studying filtering methods that could, at least at the API level, be used to limit the kind of harmful content that models like DALL-E generate. It won't be easy: unlike the filtering technologies that OpenAI implemented for its text-only GPT-3 model, DALL-E's filters must be able to detect problematic elements in images and language that they haven't seen before. But Ramesh believes it's "possible," depending on which tradeoffs the lab decides to make.

"There's a spectrum of possibilities for what we could do. For example, you could even filter all images of people out of the data, but then the model wouldn't be very useful for many applications; it probably wouldn't know a lot about how the world works," Ramesh said. "Thinking about the trade-offs there and how far to go so that the model is deployable, yet still useful, is something we've been putting a lot of effort into."
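To make the idea of API-level filtering concrete, here is a deliberately simple, hypothetical sketch of a gate placed in front of a generation call, so a prompt is screened before any image is produced. The blocklist, prompt_is_allowed, and generate_image are all invented for illustration; OpenAI's actual filters are not public, and as the article notes they would need to catch previously unseen problems in both language and images, which a static list like this cannot do.

# Hypothetical API-level gate: screen the prompt before generating anything.
from typing import Callable

BLOCKED_TERMS = {"gore", "beheading"}  # placeholder policy terms, not a real list

def prompt_is_allowed(prompt: str) -> bool:
    """Crude lexical check; a production filter would rely on learned classifiers."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def generate_image(prompt: str) -> bytes:
    """Stand-in for a real image-generation backend (hypothetical)."""
    return b"<image bytes>"

def safe_generate(prompt: str,
                  generator: Callable[[str], bytes] = generate_image) -> bytes:
    if not prompt_is_allowed(prompt):
        raise ValueError("Prompt rejected by content filter")
    return generator(prompt)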

Some experts argue that the inaccessibility of multimodal models threatens to stunt progress in this kind of filtering research. Ramesh conceded that, with generative models like DALL-E, the training process is "always going to be pretty long and relatively expensive," particularly if the goal is a single model with a diverse set of capabilities.

As the Stanford HAI paper reads: "[T]he actual training of [multimodal] models is unavailable to the vast majority of AI researchers, due to the much higher computational cost and the complex engineering requirements … The gap between the private models that industry can train and the ones that are open to the community will likely remain large if not grow … The fundamental centralizing nature of [multimodal] models means that the barrier to entry for developing them will continue to rise, so that even startups, despite their agility, will find it difficult to compete, a trend that is reflected in the development of search engines."

But as the past year has shown, progress is marching forward, consequences be damned.
