Artificial Intelligence (AI) is no longer to be viewed as a panacea for all ills; its use and abuse carry both advantages and disadvantages. In the present scenario, where almost all data is online, and when nearly everyone, whether in business or in government, or in any other sector or walk of life, is using the internet in a big way, it has become increasingly critical to safeguard online systems against cyber threats and unauthorised access. AI has become an integral part of the modern world, with applications ranging from healthcare to finance, and entertainment to security.
In the realm of security, AI has proven to be a valuable tool, helping organisations protect their assets, data, and infrastructure from cyber threats. Used effectively, AI systems can enhance security by automating threat detection, prevention, and remediation. However, with the increasing use of AI in security, there are also concerns about securing AI itself. The intersection of AI in security and the concept of secure AI is a field worth analysing. AI has the potential to greatly enhance security measures by identifying and responding to cyber threats faster, and more accurately, than traditional security methods.
Machine learning algorithms can analyse vast amounts of data to detect suspicious patterns and behaviours, helping organisations defend proactively against cyber-attacks. Additionally, AI can automate routine security tasks, freeing up human resources to focus on more complex and strategic security initiatives. Broadly speaking, three areas benefit the cyber security posture of any industry or business house:
* Threat Detection and Alerts: AI algorithms can analyse vast amounts of data to identify patterns indicative of cyber threats. Whether it is detecting anomalous behaviour in network traffic or identifying new strains of malware, AI-powered systems provide real-time alerts. Machine learning models learn from historical data, adapting to evolving threats and improving accuracy over time.
This helps organisations respond quickly to security incidents and prevent data breaches. AI can also analyse security logs and identify anomalies that may indicate a security breach.
* Automated Response: AI enables automated responses to security incidents. For instance, it can block suspicious IP addresses, quarantine infected devices, or trigger incident response workflows. Rapid response minimises the impact of cyber-attacks and reduces manual intervention.
* Behavioural Analysis: AI can analyse user behaviour to detect insider threats. Unusual activity, such as unauthorised access or data exfiltration, triggers alerts. Behavioural biometrics and user profiling strengthen authentication mechanisms.
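The behavioural-analysis idea above can be sketched in a few lines. A real UEBA product models many signals; even a simple statistical baseline over per-user login counts shows the principle. The data and the two-standard-deviation threshold here are invented for illustration.

```python
# Minimal sketch of behavioural anomaly detection over daily login counts.
# Real systems use far richer features; the 2-sigma threshold is an
# illustrative assumption, not a recommended production setting.
from statistics import mean, stdev

def find_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations away from the mean of the series."""
    mu = mean(counts)
    sigma = stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > threshold * sigma]

# Typical days, plus one burst of activity (e.g. possible data exfiltration).
logins = [4, 5, 3, 6, 4, 5, 4, 97, 5, 4]
print(find_anomalies(logins))  # [7] -> the burst day is flagged
```

In practice the flagged index would feed an alerting pipeline rather than a print statement, and the baseline would be computed per user over a rolling window.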
As AI becomes more integrated into security measures, it has also become a target for cyber criminals. Hackers can exploit vulnerabilities in AI algorithms to manipulate or evade security controls, with potentially devastating consequences. For example, attackers can use adversarial attacks to trick AI systems into misclassifying data or making incorrect decisions, compromising the very security measures the AI is meant to provide.
Securing AI goes beyond protecting the models and training data; it involves the entire enterprise application stack in which AI operates. Organisations should implement the following strategies for securing AI:
* Regularly update AI algorithms and models with the latest security patches to protect against vulnerabilities.
* Ensure that AI algorithms are developed using secure coding practices to prevent vulnerabilities.
* Regularly monitor the behaviour of AI algorithms to detect any anomalies that may indicate a security breach.
* Encrypt sensitive AI data to prevent unauthorised access and to protect it from cyber threats.
* Implement multifactor authentication to protect access to AI systems and to prevent unauthorised users from tampering with the AI algorithms.
* Restrict access to AI algorithms, models, and data to authorised users only.
As AI continues to evolve, its role in cybersecurity will become even more critical. AI plays a dual role in managing insider threats, acting both as a shield and a sword. In the role of the shield, User and Entity Behaviour Analytics (UEBA) tools leverage AI and machine learning to monitor behaviour within a network.
They detect patterns signalling ongoing insider attacks, such as sudden data exfiltration or anomalous login activity. In the role of the sword, Large Language Models (LLMs), such as ChatGPT, generate human-like text but can inadvertently facilitate insider threats: in the wrong hands, they may be manipulated for social engineering or data theft. In summary, while AI offers clear benefits, responsible development and deployment are crucial to mitigating security risks, and they enable managers to take informed decisions.
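One checklist item earlier, monitoring AI systems for tampering, can be sketched as a simple integrity check on a deployed model artefact: record a cryptographic hash at deployment time and compare against it before each load. The file layout and the choice of SHA-256 are illustrative assumptions, not a prescribed scheme.

```python
# Minimal sketch of tamper detection for a deployed model file.
# The baseline digest would be recorded at deploy time and stored
# separately, under stricter access controls than the artefact itself.
import hashlib

def file_digest(path):
    """SHA-256 of a file, read in chunks so large models fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path, expected_digest):
    """True only if the artefact on disk matches the recorded baseline."""
    return file_digest(path) == expected_digest
```

A mismatch would block loading and raise an alert; stronger variants sign the digest with a key the serving host cannot modify.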
(The writer, a retired IPS officer, has served in various capacities, including as Commissioner of Delhi Police, DG BSF, DG NCB, DG BCAS, and Special Director, CBI)