Love it, fear it, or hate it, but we can no longer afford to ignore it! Since the term was coined in 1956 by John McCarthy, the definition of ‘Artificial Intelligence’ (AI) has remained as diverse as our understanding of the technology frontier.
While the scope of understanding (and thus opinion!) varies with the technical knowledge, ethical and political standing, and even the fancy of its proponents, AI refers to the branch of computer science that deals with simulating intelligent human behavior in computer-driven machines.
The past five years have witnessed AI moving from a perpetually hyped laboratory concept to a revenue-generating reality, triggering interest from technology giants such as Amazon and Facebook, which see it as a critical platform for faster and cheaper infrastructure and for intelligent, and thus more real-time and relevant, advertising.
As industry and technology experts debate the hype, reality, fancy, ethics and security concerns related to AI, we at Medha Research summarize the top trends over the next 12 months that businesses can benefit from.
Heightened focus on AI-enabled Hardware and AI Libraries
AI processing differs significantly from standard computing or GPU processing, triggering the need for, and focus on, specialized AI-enabled chips. The emphasis is on choosing the optimal processing architecture, whether a GPU (Graphics Processing Unit), DSP (Digital Signal Processor), FPGA (Field Programmable Gate Array) or custom ASIC (Application Specific Integrated Circuit).
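To make the hardware dependence concrete, here is a minimal sketch (assuming PyTorch as the framework; the model is a toy stand-in) of how application code probes for an available accelerator and places the workload on it:

```python
import torch
import torch.nn as nn

# Pick the best available processing architecture at runtime:
# a CUDA-capable GPU if present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy network; a real workload would load a trained model here.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model = model.to(device)  # move the model's weights onto the accelerator

batch = torch.randn(32, 128, device=device)  # input data on the same device
with torch.no_grad():
    output = model(batch)  # inference runs on the selected hardware
print(f"Ran inference on: {device}")
```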
The past year has witnessed leading chip manufacturers such as NVIDIA, Qualcomm, ARM and AMD churn out specialized chipsets that are finding extensive application in natural language processing, speech recognition and computer vision across multiple domains.
An AI-enabled solution is not designed in isolation, which has triggered a spurt in technological advancements in, and market demand for, related components such as SSDs (Solid State Drives), interconnects and CPUs.
Government bans reignite debate on use of Facial Recognition Technology
Facial recognition, an AI application, is geared towards identifying a person either from his or her digital image or through patterns of facial features. While the technology has traditionally been associated with the security industry, the past 18 months have witnessed a spurt of applications in industries such as healthcare and retail.
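As a minimal sketch of the underlying mechanics (assuming the open-source face_recognition Python library; the image file names are hypothetical and each photo is assumed to contain a face), identification boils down to comparing learned facial-feature encodings:

```python
import face_recognition

# Load a reference photo and a candidate photo (hypothetical file names).
known_image = face_recognition.load_image_file("reference_photo.jpg")
unknown_image = face_recognition.load_image_file("camera_frame.jpg")

# Each face is encoded as a 128-dimensional feature vector
# produced by a deep neural network.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encodings = face_recognition.face_encodings(unknown_image)

# A small distance between encodings is treated as a match.
for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Match" if match else "No match")
```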
Of late, moves by police and local government agencies in the US to use facial recognition technology for crime profiling have run into multiple hurdles. The perceived risks of technology-driven surveillance seem to outweigh the benefits at this point, triggering efforts to regulate the technology. While San Francisco became the first US city to ban the use of facial recognition technology by police and local government in 2019, more cities and communities have been supportive of a regulated approach rather than a blanket ban.
Facial recognition, like other AI-related technologies, is still in its early evolution phase, triggering fears about accuracy and bias. For instance, while the technology appears accurate at identifying white male faces, it has often misidentified people of color and women, and is thus susceptible to potential abuse by law enforcement agencies.
Increased convergence at the Edge
Edge computing, where data gets processed as close to its source as possible, offers a more efficient and practical way to process data than moving it to a centralized data-storage server. While IoT facilitates the streaming of data from edge devices to core analytics systems, the combination of AI and IoT is enabling the edge devices themselves to be intelligent and capable of processing relevant data.
Advanced Machine Learning (ML) models incorporating deep neural networks can be optimized to run at the edge. These models can handle speech synthesis, video frames, time-series data and other unstructured data generated by devices such as microphones, cameras and other sensors. Besides reducing or even eliminating latency, the key benefits offered by the convergence of AI and IoT at the edge include enhanced security, compliance with regulations governing data transfer, and a lower risk of data corruption.
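As a minimal sketch of this optimization step (assuming TensorFlow and its TensorFlow Lite converter as the toolchain; the model is a toy stand-in for a trained network), a model can be shrunk and serialized for a resource-constrained edge device:

```python
import tensorflow as tf

# A toy model standing in for a trained network (e.g., for sensor data).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Convert to TensorFlow Lite, applying default optimizations
# (such as weight quantization) to reduce model size and latency.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The compact .tflite file is what gets deployed to the edge device.
with open("edge_model.tflite", "wb") as f:
    f.write(tflite_model)
```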
Interoperability among Neural Networks holds the key to growth
The existence of multiple frameworks for developing neural network models results in a lack of interoperability among neural network toolkits, delaying the rapid adoption of AI. The Open Neural Network Exchange (ONNX), a community project created by Facebook and Microsoft, enables these tools to work together by allowing them to share models.
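As a minimal sketch of what model sharing looks like in practice (assuming PyTorch as the source framework; the model is a toy stand-in), a model can be exported to the ONNX format and then loaded by any ONNX-compatible runtime or toolkit:

```python
import torch
import torch.nn as nn

# A toy model standing in for a trained network.
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
model.eval()

# Export to the framework-neutral ONNX format; a dummy input
# is used to trace the model's computation graph.
dummy_input = torch.randn(1, 10)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",  # portable file any ONNX-compatible tool can load
    input_names=["input"],
    output_names=["output"],
)
```

The resulting model.onnx file can then be served by ONNX Runtime or imported into another framework's toolchain.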
Automating Machine Learning keeps the focus on the business problem
Automated Machine Learning (AML) empowers developers to evolve machine learning models capable of addressing complex scenarios, eliminating the need to go through the manual process of building and training ML models. AML thus empowers business analysts to focus on the business problem, bypassing the need to understand complex ML processes and workflows.
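As a minimal sketch (assuming the open-source TPOT library as the AML tool and a bundled scikit-learn dataset as stand-in data), the analyst supplies labeled data and lets the system search for a suitable model pipeline automatically:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

# Labeled data is all the analyst supplies; the search is automated.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# TPOT iteratively tests and modifies candidate pipelines
# (preprocessing steps, algorithms, hyperparameters) behind the scenes.
automl = TPOTClassifier(generations=5, population_size=20, random_state=42)
automl.fit(X_train, y_train)

print("Held-out accuracy:", automl.score(X_test, y_test))
automl.export("best_pipeline.py")  # the selected pipeline as readable code
```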
With data science emerging as a key differentiator across multiple industries, automation of tasks such as model building and data integration has been a priority for data and analytics software platform vendors. Gartner predicts that about 40% of data science tasks will be automated by 2020. Autonomous driving, sales forecasting and lead prioritization systems are among the key applications of AML.
While Automated Machine Learning systems are capable of iteratively testing and modifying algorithms and selecting the best-suited models, these systems operate as ‘black boxes’. Because the selection techniques are hidden, users may not trust the results and may hesitate to tailor the systems to their search needs. Researchers at MIT are currently working on an interactive tool that empowers users to understand, and thus control, the operations of automated machine learning systems.
AI-IoT combination powers Investments in Predictive Maintenance
Globally, the cost of machine downtime is a staggering $647 billion, as reported by the International Society of Automation. While the deployment of IoT has empowered companies to process massive amounts of sensor data, AI enables them to go beyond routine maintenance and actually predict the possibility of downtime, thus driving reliability in mission-critical systems.
AI-driven predictive maintenance can improve the efficiency of technical personnel and extend the life of manufacturing equipment, besides minimizing the costs incurred from machine downtime. Real-time data processing and automation, when linked with historical data, provide detailed insights into the root causes of machine failures.
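As a minimal sketch (assuming scikit-learn and synthetic stand-in data, since real deployments would draw on IoT sensor feeds), failure prediction can be framed as a classification problem over historical sensor readings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical sensor data: each row is one machine
# snapshot (e.g., vibration, temperature, runtime hours); label 1 means
# the machine failed soon afterwards.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn failure patterns from historical data, then score fresh readings.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Probability of imminent failure per machine; maintenance is scheduled
# for machines above a chosen risk threshold.
risk = model.predict_proba(X_test)[:, 1]
print("Machines flagged for maintenance:", int((risk > 0.5).sum()))
```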