Is your information really secure?

Cyber security risk management is no longer confined to solid firewalls and state-of-the-art Virtual Private Networks. A video that recently caught my attention may make you rethink the cyber security programs you have (or intend to have). Have a look. Video credit: CNA Insider.

Here are the factors one should focus on and strategize around before embarking on building or strengthening cyber security risk assessments. Break them down into segments based on users, data, location and devices. Security risk assessments must take a holistic approach that includes human vulnerabilities as well – not just machines and devices.

  1. What is the kind of data you want to protect – your business assets (physical, financial and information), employees’ data, client/customer information?
  2. Where is your data located? In the cloud or on premises? Think through and evaluate your cloud security concerns, whether you are in a shared tenancy or a private cloud. Even if your cloud service provider supplies basic risk management techniques, you are still responsible if your data in the cloud gets leaked.
  3. Do the applications you run (or intend to run) have basic security built in? Do they provide context-based sign-in before granting access? Do they offer the flexibility to set up multi-factor authentication on different devices like mobiles, tablets and laptops?
  4. Have you categorized your users? (like how many are temporary / contractual / permanent etc.) Who needs to have privileged access to critical data and transactions?
  5. What kind of devices do users use for performing their tasks – whether within the perimeter or firewall of the company or from the outside?
  6. Should you use a “zero-trust” security policy? When employees are allowed to “bring-your-own-device (BYOD)” (as some companies do), can you take the risk of an infected device that may share information with a hacker or subject your organization to a malicious attack?
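
The segmentation in the steps above can be sketched in code. This is a minimal illustration, not a standard methodology: the segment names, weights and scores below are my own assumptions, chosen only to show how ranking user/location/device combinations can prioritise an assessment.

```python
# Hypothetical weights per segment (higher = riskier); all values are
# illustrative assumptions, not an industry scoring standard.
RISK_WEIGHTS = {
    "user": {"permanent": 1, "contractual": 2, "temporary": 3},
    "location": {"on_premise": 1, "private_cloud": 2, "shared_cloud": 3},
    "device": {"managed_laptop": 1, "byod_mobile": 3},
}

def risk_score(user_type, location, device):
    """Sum the segment weights; higher scores should be reviewed first."""
    return (RISK_WEIGHTS["user"][user_type]
            + RISK_WEIGHTS["location"][location]
            + RISK_WEIGHTS["device"][device])

inventory = [
    ("alice", "permanent", "on_premise", "managed_laptop"),
    ("bob", "temporary", "shared_cloud", "byod_mobile"),
]

# Sort so the riskiest combinations surface at the top of the assessment.
ranked = sorted(inventory, key=lambda r: risk_score(*r[1:]), reverse=True)
```

A real program would of course feed this from an actual user and asset inventory and calibrate the weights to the organization's own risk appetite.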

When evaluating security solutions keep in mind

  • Solutions that offer to protect the “perimeter” of the company (like firewalls, anti-virus / malware software, anti-phishing devices, network sniffers, etc.) – mainly the border around its physical locations and intranets – are not sufficient. Most such solutions cannot understand application-level security breaches or proactively inform the CISO’s office of the risks so the breach can be plugged immediately.
  • Large companies having a geographical spread have a different set of requirements to deal with as compared to small or mid-size companies.
  • Companies that still rely on old or legacy systems – proprietary in nature and not amenable to the latest technology upgrades – face a more complex security scenario.
  • Look for solutions that help you centralize the various types of log information in real time (or close to real time) from multiple systems. They must be capable of tracking an inventory of multiple devices (networks, servers, terminals, mobile devices, laptops, access and audit logs, wireless access from extranets, etc.)
  • They should be able to track users, their roles and the usage of the various actions / tasks within the system. They should ensure that context-based risk assessment is done periodically. Ensure you have up-to-date information about everyone (including employees, customers and suppliers) who has access to your systems and about the devices they use.
  • Placing your single sign-on outside of your perimeter (on the internet) may require a lot of thought, not only due to the complexity of scenarios, but also due to legal compliance requirements (like data privacy laws).
  • Migrating to the cloud environment requires you to evaluate and assess security risks carefully and whether your cloud service provider is experienced enough to look at the larger security aspects – not just employee access but also B2B or B2C scenarios used by your organization.
  • Do not make security risk assessments a quarterly or annual affair; they should be an ongoing exercise, best implemented as part of daily operations, so that you are proactively alerted and can react to breaches before severe damage is done.
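
The log-centralization point above can be sketched very simply: merge events from several systems into one time-ordered stream, then run a rule over it. The field names and the "three failed logins" rule are illustrative assumptions, a stand-in for what a real SIEM collector would do at scale.

```python
from collections import Counter

# Toy event records from two different systems (field names are assumed).
firewall_log = [
    {"ts": 100, "source": "firewall", "user": "eve", "event": "login_failed"},
    {"ts": 160, "source": "firewall", "user": "eve", "event": "login_failed"},
]
app_log = [
    {"ts": 130, "source": "app", "user": "eve", "event": "login_failed"},
    {"ts": 140, "source": "app", "user": "dan", "event": "login_ok"},
]

# Centralize: merge both sources into one stream ordered by timestamp.
stream = sorted(firewall_log + app_log, key=lambda e: e["ts"])

# Alert when a single user fails to log in 3+ times ACROSS systems -
# a pattern neither system would flag on its own.
failures = Counter(e["user"] for e in stream if e["event"] == "login_failed")
alerts = [user for user, n in failures.items() if n >= 3]
```

The point of the merge is exactly the one made above: the correlation only becomes visible once logs from multiple devices land in one place.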

My take on IRM and GRC

The next buzzword after GRC (Governance, Risk and Compliance) is now IRM (Integrated Risk Management). (Not to be confused with another acronym “IRM” which denotes “Information Rights Management” which is a form of IT security technology for protecting access to sensitive documents and emails.)

Why emphasize new acronyms so much and confuse practitioners of risk, control and compliance? Why debate whether GRC is dead and IRM is the new norm? Would it not be better to get down to basics and understand the importance of the concepts that each of those words denotes? (People generally like to put old wine in new bottles to keep the interest going.)

Technology – when properly deployed – has always been capable of giving an integrated view of things in an organization.

But jumping into a technology approach without proper understanding by all stakeholders concerned leads to quick disillusionment and project failure.

It is a fact that silos exist in several organizations. This is mainly because different departments (such as finance, internal audit, the risk committee, operational heads) cocoon themselves in their own departmental priorities and take a short-sighted approach. Their reasons and defences are many – inertia to collaborate with other stakeholders, ego issues over whose approach is better, a “get-it-done-with” attitude, citing shortage of staff, insufficient budget that makes them adopt sub-optimal solutions, etc. The top reason could also be that the C-level is not apprised of the benefits, or does not consider these initiatives as adding to top-line revenues!

Quoting Gartner’s definition: “Integrated risk management (IRM) is a set of practices and processes supported by a risk-aware culture and enabling technologies, that improves decision making and performance through an integrated view of how well an organization manages its unique set of risks.”

Since the summer of 2018, Gartner has been moving away from GRC (Governance, Risk and Compliance) towards IRM (Integrated Risk Management).

In my view, if one forgets the acronyms – GRC and IRM – and looks at the concepts being espoused, one can very well see that the fundamentals have not changed; the emphasis is on a holistic approach to better managing risks arising from poor governance, failed business controls, non-compliance, weak IT security leading to data breaches, external threats, etc.

To elaborate further, what all of us (or most of us) understand and agree on at a high level are the following points:

  • There is no “business” or “for-profit” organization without calculated risks. Managing those risks intelligently and on time ensures business continuity and success. That is why “Risk Management” belongs in all decision-making processes.
  • In the long run, only integrity pays; ethical practices in business build brand value and ensure survival, while others simply vanish. This is what we understand as the “Governance” standards set by the entrepreneurs and promoters, expected to be communicated by the top management to the operational teams and followed.
  • “Governance” has two aspects to it – one set of internal practices and policies set up by the management, and another set of operational, tax and statutory compliances applicable to particular industries, countries and communities. The latter broadly comes under the “Compliance” umbrella.

On a deeper level, one can see that all the above points are intertwined and one cannot exist without the other –

  • Governance cannot be enforced without proper policy formulation and communication of the internal policies (corporate specific procedures and ethical practices) that the management envisions and laying emphasis on external compliances to ensure business continuity. It is a failure of governance if business risks are not identified, assessed and mitigated on time. Governance also implies that proper internal controls are in place and working effectively.
  • Compliance does not stand alone – failure to comply – whether with internal policies (such as purchase or pricing policies) or with external statutes (such as taxation, etc.) – is a reflection of poor business controls.
  • Risk awareness is the overarching umbrella that recognises threats to the business continuity – whether arising from poor governance, improper compliance, inadequate IT security measures to protect data and ineffective business controls in its processes that could lead to frauds.

The bottom line: all organizations wishing to set up a framework for Governance, Risk Management and Compliance may need to consider the following:

  • have a holistic understanding of, and approach to, the proposed integrated framework; include all functions and processes – not just finance, internal audit or SOX compliance. Legal, brand, cyber security and IT risks, conflicts of interest that result in abuse and fraud, and environment, health and safety risks deserve equal importance when we talk about a sustainable business in the long run.
  • bring all stakeholders onto one page through workshops, discussions, whitepapers, surveys, opinions, etc.,
  • don’t jump into a technology solution without assessing preparedness and maturity of all functions,
  • as far as possible avoid siloed programs (that are focussed only on a particular function or department),
  • even if you have to start small (if there are budget or resource constraints), never compromise on the big picture of where you want to be at the end of the program,
  • keep in mind an integrated approach that ties together all types of internal or external risks to the enterprise.

A Primer on AI/ML/DL/NN etc.

Today, many of us non-technical people feel quite left out of conversations that are buzzing around in companies, social media, webinars, presentations, etc.

Yes – I am talking about the most talked about acronyms – Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), Neural Networks (NN) and so on that also includes Big Data, Statistical methods, Data Science, Predictive Analytics and so forth.

Here is my attempt to facilitate understanding of the basics.

WHAT IS ARTIFICIAL INTELLIGENCE (AI)?

  1. Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. If a system or a device can do “smart” things like humans do, then it is said to be artificially intelligent.
  2. It is an umbrella concept that includes image processing, natural language processing, robotic process automation, machine learning, neural networks and many more.
  3. There is a wrong impression that AI is a system; rather, it is implemented in a system. Particular applications of AI include expert systems, speech recognition (Natural Language Processing (NLP)) and machine vision.
  4. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction.

WHAT IS MACHINE LEARNING (ML)?

  1. To put it very simply, machine learning is defined as “the ability (for computers) to learn without being explicitly programmed.” Machine Learning deals with making your computers (or machines) learn from data provided by the external environment – connections to sensors, electronic components in devices, storage devices, etc. It also crunches huge input data sets to come up with patterns and predictions – like Amazon suggesting purchases based on your buying preferences, or Netflix offering options based on your previous viewing history.
  2. Machine Learning is simply a way of achieving Artificial Intelligence. The main objective of ML is to allow computers to learn automatically, without human intervention or explicit programming, and adjust their actions accordingly.
  3. ML builds models and algorithms that it keeps constantly updating and fine-tuning based on the inputs you provide on an ongoing basis.
  4. Machine learning enables analysis of massive quantities of data.
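
The idea of "learning without being explicitly programmed" can be shown with a tiny sketch: a 1-nearest-neighbour classifier. Note the toy data points and labels below are entirely my own invention; the point is that no rule for the classes appears anywhere in the code – the prediction comes purely from the examples.

```python
import math

# Made-up training examples: (feature_x, feature_y) -> label.
training = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.0, 8.5), "large"),
]

def predict(point):
    """Return the label of the geometrically closest training example."""
    _, label = min(
        ((math.dist(point, p), lbl) for p, lbl in training),
        key=lambda t: t[0],
    )
    return label
```

Feed it different training data and the same code "learns" a different concept – which is exactly the contrast with explicitly programmed rules.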

WHAT IS DEEP LEARNING (DL)?

  1. Deep learning is a specialized form of machine learning. For example, a classic machine learning workflow starts with relevant features being manually extracted from images; the features are then used to create a model that categorizes the objects in the image.
  2. Whereas with a deep learning approach, relevant features are automatically extracted from images. In addition, deep learning performs “end-to-end learning” – where a network is given raw data and a task to perform, such as classification, and it learns how to do this automatically.
  3. Deep Learning is also sometimes referred to as “Artificial Neural Networks”. Another key difference is that deep learning algorithms scale with data; they often continue to improve as the size of your data increases.
  4. Deep learning is applied in many areas of artificial intelligence such as speech recognition, image recognition, natural language processing, robot navigation systems, self-driving cars etc. Some examples that we see in our daily lives are virtual assistants like Alexa, Siri, Cortana, driverless trucks, drones and automated cars, automatic machine translation, Character text generation, facial recognition, behavioural analysis, etc.
  5. Big Data is required for Deep Learning: massive data must be fed into the models. However, the bottleneck remains cleansing and processing the data into the format required to power the DL models.

WHAT ARE NEURAL NETWORKS?

  1. A neural network is a type of machine learning model loosely patterned after the human brain. Neural networks cannot be programmed directly for a task; rather, like a child’s developing brain, they need to learn from the information they are given.
  2. They have become important, standard tools for data mining. A neural network is an adaptive system that changes its structure based on the external or internal information that flows through the network during the learning phase.
  3. A neural network usually involves a large number of processors operating in parallel and arranged in tiers. The first tier receives the raw input information — analogous to optic nerves in human visual processing. Each successive tier receives the output from the tier preceding it, rather than from the raw input — in the same way neurons further from the optic nerve receive signals from those closer to it. The last tier produces the output of the system.
  4. Handwriting recognition is an example of a real-world problem that can be approached via an artificial neural network. Humans can recognize handwriting through simple intuition, but for computers the challenge is that each person’s handwriting is unique, with different styles and even different spacing between letters, making it difficult to recognize consistently. Its applications are varied: automated address reading on letters at the postal service, authorization of signatures on documents, reducing bank fraud on checks, etc.
  5. Technology uses have expanded to many more areas such as chatbots, stock market prediction, delivery route planning and optimization, drug discovery and development and many more.
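
The "tiers" described above can be sketched with a tiny fixed-weight network: the first layer sees the raw input, each later layer sees only the previous layer's output. The weights here are hand-picked by me (an assumption, so the network computes XOR); a real network would learn its weights from data.

```python
def step(x):
    """Simple threshold activation: fire (1) if the weighted sum is >= 0."""
    return 1 if x >= 0 else 0

def layer(inputs, weights, biases):
    """One tier: each neuron takes a weighted sum of the previous tier."""
    return [step(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(a, b):
    # First tier receives the raw input (like the optic nerve analogy).
    hidden = layer([a, b], weights=[[1, 1], [-1, -1]], biases=[-1, 1])
    # Last tier receives only the hidden tier's output and produces the result.
    (out,) = layer(hidden, weights=[[1, 1]], biases=[-2])
    return out
```

XOR is the textbook case here because no single neuron can compute it: the stacked tiers are what make it possible.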

WHAT IS DESCRIPTIVE, PREDICTIVE AND PRESCRIPTIVE ANALYTICS?

  1. Descriptive – based on insights into historical data – What has happened?
  2. Predictive – based on statistical tools and forecasting techniques to answer – What could happen?
  3. Prescriptive – use simulation and optimization algorithms to advise on possible outcomes and answer – what should be done?
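
The three questions above can be mapped onto a toy monthly-sales series. The numbers, the naive trend forecast and the reorder rule are all invented for illustration; real predictive and prescriptive analytics would use proper forecasting and optimization models.

```python
from statistics import mean

sales = [100, 110, 120, 130]          # descriptive input: what has happened

# Descriptive: summarise the history.
average = mean(sales)

# Predictive: naive forecast = last value + mean month-on-month change.
trend = mean(b - a for a, b in zip(sales, sales[1:]))
forecast = sales[-1] + trend          # what could happen next month

# Prescriptive (simplified rule, my assumption): act on the forecast.
capacity = 135
action = "increase capacity" if forecast > capacity else "hold"
```

Even this toy version shows the progression: each layer of analytics consumes the previous one's output.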

WHAT IS DATA SCIENCE AND WHAT CAN YOU DO WITH IT?

  1. Data Science is a study which deals with identification, representation and extraction of meaningful information from data sources.
  2. Some of the tasks you can do with Data Science include: framing conclusive research and open-ended questions; extracting large volumes of data from external and internal sources; deploying statistical, machine learning and analytical methods; cleaning, pruning and getting data ready for processing and analysis; and looking at data from various angles to determine hidden patterns, relations and trends.
  3. If you are wondering about the difference between a Data Analyst and a Data Scientist, the two are quite far apart in the goal or objective with which they work. A Data Analyst starts by aggregating, querying and mining data for reporting on various functions. A Data Scientist starts by asking the right questions, and therefore needs substantive domain expertise and non-technical skills as well.

Analytics for fraud investigations

Many have wondered why one would perform analytics for fraud detection (or prevention) in good times (business as usual), when no whistle has been blown about a suspected fraud.

Is this not a grey area where people’s sensitivities are involved and news of investigations can affect the organization’s brand image? Where being trolled over social media becomes painful to counter? Yet the CFO’s office is the hardest hit when it comes to answering the Board on financial losses incurred due to fraudulent activities that leave a gaping hole in the finances.

Traditional anomaly detection is conducted routinely by internal or external auditors. But it is insufficient: it is not backed by powerful tools, and the objective and terms of reference of these audits limit the investigation to a certain level and no more.

Often referred to as “forensic audit”, fraud detection assumes great significance because it requires digging deeper than a normal audit: examining and investigating internal control failures, conflicts of interest and social networks, applying factors such as behavioural analysis, and crunching big data that can extend beyond the time period under the lens.

A prudent and practical approach would be to set up a mechanism that can proactively provide analytics and flag off high risk areas that need immediate attention.

Fraud Analytics is the use of analytical technology with intelligent business rules and techniques, which will help detect improper transactions like bribery, favouritism, working capital leakage, asset misappropriation, etc. either before or after the transaction is done, so that appropriate steps can be taken to prevent further damage.

Fraud Analytics also helps measure performance, evaluate internal control failures and deficiencies, and standardize and drive constant improvement that benefits the overall organization and its governance.

Fraud perpetrators use many different and novel techniques, randomized to prevent discovery, and therefore the techniques used for detection have to include one or more of the following:

  1. Running automated business rules that throw up anomalies, which can then be investigated as false or true positives.
  2. Calculation of various statistical parameters like averages (for example average number of calls made, emails exchanged, delays in bill payments, etc.), quantities (for example comparison of total quantities ordered / received / invoiced / returned), performance metrics (e.g. attrition rate pattern amongst certain departments, sales returns peaking immediately after monthly close, etc.), user profiles (e.g., interested party contracts, sudden lifestyle changes by the user, behavioural patterns noticed) etc.
  3. Trend analysis using time series distribution.
  4. Clustering and classification that can help find patterns and associations within data sets.
  5. Algorithms, models and probability distributions of various business activities.
  6. Machine learning and neural networks that automatically identify the characteristics of fraud and can then be applied to increasing Big Data inputs.

Having a Fraud Prevention program for controlling fraud risks is an important part of Enterprise Risk Management, and it gives your investors, partners and auditors more confidence in your demonstrated ability to tackle fraud in a sustained manner rather than on an ad-hoc basis.

Blockchain – Basics

Blockchain is a much-used word and a hot topic for the last few years. (On the lighter side, for those of you who are not technically inclined – do not for a moment think it is another piece of jewellery you may have missed out on :-))

BLOCKCHAIN is simply a technology platform that contains BLOCKS of data / information chained together; the chain grows with the addition of more BLOCKS (with a whole lot of technical machinery behind it to ensure integrity).

I thought it best to pen down a few fundamentals of what exactly Blockchain technology is, before going into the benefits and risks associated with it as of today.

  1. The terms blockchain and bitcoin are not synonymous or interchangeable. Bitcoin is a cryptocurrency token (one of many digital currencies available and emerging in the world).
  2. You may wonder what a cryptocurrency is: it is a medium of exchange like traditional currency, designed to exchange digital information through a process made possible by cryptography. Cryptocurrency is a bearer instrument, meaning that the holder of the currency has ownership and no other record is kept of the owner’s identity.
  3. Blockchain, on the other hand, is the ledger (or technology) that keeps track of who owns the digital tokens at any given point in time. Therefore, you need blockchain technology in order to transact in Bitcoins.
  4. Blockchain can be defined as an interlinked chain of “BLOCKS”. These BLOCKS contain data or information on transactions between persons, businesses, governments or other users, digitally timestamped with a technique that makes it infeasible to backdate, erase or tamper with them in any way. This provides integrity and security in transaction recording.
  5. This is possible because all information transferred via Blockchain is encrypted, and a distributed digital ledger keeps every occurrence recorded and immutable, making it far more tamper-resistant than traditional methods of transacting.
  6. Blockchain enables peer-to-peer transactions between parties that are even unknown to each other. Unlike in traditional methods where there needs to be a central authority or trusted middlemen to complete transactions, Blockchain guarantees correct transactions through an automatic program.
  7. Typically, when you want to transfer money from a bank in one country to a person in another, the payment necessarily passes through a chain of transactions: your bank’s correspondent bank remits it to the receiver’s correspondent bank, and it finally reaches the receiver’s own bank account. In a blockchain scenario, the parties transact directly over a shared ledger – observe the diagram below (released for public understanding by ICICI Bank in India).
  8. Blockchain can be used for the secure transfer of funds, property, contracts, etc. without the intervention of a third-party intermediary like a bank or Government. The data recorded inside a Blockchain is immutable and irreversible.
  9. Blockchain is decentralized, so there is no need for any central, certifying authority, eliminating the single point of failure in a centralized setup.
  10. The data that is stored in a BLOCK depends upon the type of Blockchain – it can be a Bitcoin Blockchain or a healthcare blockchain or a Government record management type. It can be a public blockchain which is transparent and anyone can use the same, or a private blockchain or consortium which restricts it to authorized or a community of users.
  11. Blockchains cannot run without the Internet; a blockchain is a software protocol that uses databases, software applications and a set of connected computers.
  12. Blockchain technology first evolved from the distributed ledger concept used for payments in cryptocurrencies like Bitcoin. Then came smart contracts: executable programs that check and verify conditions. Now there are Dapps (decentralized applications) running on peer-to-peer networks, just like any other app, with front-end and back-end code.
  13. There is a myth that Blockchain solves every problem and that a smart contract is always legally binding. The reality is that this technology is emerging so fast that there are still grey areas to be addressed.
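
The chained-blocks idea from the points above can be shown in a minimal sketch: each block stores the hash of its predecessor, so altering any earlier block breaks every hash after it. This is only the integrity mechanism; production chains add consensus, signatures, Merkle trees and much more.

```python
import hashlib, json

def make_block(data, prev_hash):
    """Create a block whose hash covers its data AND its predecessor's hash."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev_hash": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    return block

def valid(chain):
    """Recompute every hash and check the links; tampering becomes visible."""
    for prev, block in zip(chain, chain[1:]):
        if block["prev_hash"] != prev["hash"]:
            return False
    return all(
        b["hash"] == make_block(b["data"], b["prev_hash"])["hash"]
        for b in chain)

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("Alice pays Bob 5", genesis["hash"])]
```

Change the data in any block and `valid` returns False, which is the property the article describes as immutability.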

While India’s position on Blockchain technology is positive, it is cautious in its approach to digital currencies like Bitcoin. However, a lot of pioneering work across industries and sectors is already in progress, and both public and private sectors in India are actively contemplating Blockchain for use cases like land registration and property management, e-KYC for SEBI (in the wake of large scams), supply chain finance, international trade finance and foreign currency remittances by banks, e-Governance by linking databases built around the citizen identity project Aadhaar, and so on.

Information Security

What is the best practice approach that can help create a solid framework for establishing Information Security policies, procedures and practices?

One needs to recognize the various aspects of information security as enunciated in COBIT and other world-wide standards and understand the impact of data privacy laws on information security.

Information security is

  • the practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction of information.
  • the balanced protection of the confidentiality, integrity and availability of data without hampering organization productivity.
  • a multi-step risk management process that identifies assets, threat sources, vulnerabilities, potential impacts, and possible controls, followed by assessment of the effectiveness of the risk management plan.

Data protection and privacy is an integral part of Information security measures.

  • Wherever personal identifiable information or sensitive data is collected, stored, used and finally deleted or destroyed, privacy issues arise if there are improper controls or insufficient disclosures on how the processes are handled.
  • Information from sources such as financial records, credit card information, healthcare, payroll information, social security numbers, Aadhar card information, biological traits, geographic locations and residence, voting preferences, religious background information, web-surfing behaviour, etc. all fall within the purview of personal data that is subject to privacy laws in various degrees.
  • Several laws prevail in different countries on ensuring data privacy and protection – the latest and most comprehensive one being the GDPR for EU nations.

The COBIT framework for Information Security by ISACA states five principles to be followed.

  1. Meeting stakeholder needs.
  2. Covering the enterprise end-to-end.
  3. Applying a single integrated framework.
  4. Enabling a holistic approach.
  5. Separating governance from management.

MEETING STAKEHOLDER NEEDS

Stakeholders at different levels expect different fulfillment of requirements. These business objectives must be translated into IT related goals that would enable achievement of the business goals. Top level stakeholders start with the Board of Directors, CEO, CFO, followed by the CIO, CTO, CISO. Next level could be security managers and system administrators – followed by end-users.

A top-down approach is the most sustainable and successful because it ensures

  • Clearly laid out policies, procedures and timelines
  • Dedicated funding and clear planning
  • Clear accountability for each of the processes
  • Change management enforced throughout the organization for smooth adoption.

COVERING THE ENTERPRISE END-TO-END

  • One has to start by understanding the elements of the information system: the hardware, networks, software, databases, people and procedures connected therewith.
  • Next comes the evaluation of vulnerability and checking the adequacy of controls established for network security, WiFi networks, firewalls, the perimeters of your system landscape.
  • Recognize the impact of laws related to data protection and privacy in the locations where your business operates or intends to operate.
  • The IT department in the organization should aim to cover all functions and processes of the business – include internal and external access to processes.
  • All information and the related technologies to be treated as “assets” just like any other asset in the business. Information is the “crown jewel” of your organization and must be protected at all times.
  • Threat evaluation is not just limited to the periphery of your system landscape – more importantly, it includes
    • continuous, real-time monitoring of business application activities performed by people, remote calls between systems, external threats and attacks, and social engineering tactics;
    • providing end users adequate authorization, ensuring no or minimal segregation-of-duties risks, and masking sensitive information from unauthorized users in compliance with privacy laws;
    • recognizing normal log patterns and finding anomalies, and identifying attacks by external or internal users (pseudonymizing users during investigation);
    • a cyber security professional watching over a consolidated cockpit that integrates all events and logs for meaningful interpretation and action.
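
The pseudonymization mentioned above can be sketched with a keyed hash: user IDs in audit logs are replaced by a consistent alias, so investigators can still correlate one user's actions without seeing who it is. The key name and alias format are my own illustrative choices; real deployments would manage the key and its rotation formally.

```python
import hmac, hashlib

# Held separately from the log data, e.g. by a privacy officer (assumption).
SECRET_KEY = b"rotate-me-regularly"

def pseudonymize(user_id):
    """Map the same user to the same alias every time; without the key,
    the mapping cannot be reversed or recomputed."""
    digest = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256)
    return "user-" + digest.hexdigest()[:8]

log = [("alice", "viewed_payroll"), ("alice", "exported_payroll"),
       ("bob", "login")]
masked = [(pseudonymize(u), action) for u, action in log]
```

The keyed HMAC (rather than a plain hash) matters: with an unkeyed hash, anyone could hash a list of known usernames and reverse the aliases.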

APPLYING A SINGLE INTEGRATED FRAMEWORK

COBIT 5 for Information Security provides an overarching governance and management framework embodying best standards and practices. COBIT encompasses many models, such as ITIL, the ISO/IEC 27000 series, the ISF Standard of Good Practice for Information Security and the US National Institute of Standards and Technology (NIST) SP 800-53A.

While evaluating a single integrated framework, keep in mind a holistic approach that can be broken down into achievable programs suiting the organization in the short, medium and long term. A non-technical discussion of the requirements must precede any look at technical solutions that would address the pain points faced by different stakeholders.

ENABLING A HOLISTIC APPROACH

COBIT recommends a holistic approach that takes into account the following:

  • Considers principles, policies and frameworks.
  • Looks at processes, organizational structures, culture, ethics and behaviour.
  • Deals with all information produced and used by the enterprise.
  • Includes all the infrastructure, services and applications that provide the enterprise with IT processing.
  • Ensures people, skills and competencies are available for successful completion of all activities and taking corrective decisions.

SEPARATING GOVERNANCE FROM MANAGEMENT

These two disciplines involve different activities, serve different purposes and sit with different parts of the organization.

  • Governance is the responsibility of the Board and top management.
  • Management is the responsibility of the executive management under the leadership of the CEO or CFO, etc.

While governance sets the tone at the top for agreed objectives, prioritization and decision making, management has to plan, build, run and monitor the activities in alignment with the governing body.

Know the difference …

Many people have asked me whether internal controls monitoring is sufficient to unearth suspicious transactions, abuse of processes or frauds. Do you really need another fraud investigation exercise?

Both exercises have different objectives and perspectives and answer different needs (e.g. do we need to prevent or detect, examine historical or current data, use a predictive or presumptive approach, bring in concurrent or forensic audit, etc.).

My take on this is as follows:

Continuous monitoring of internal controls of an organization focuses on

  1. Determining sufficiency and deficiency of internal controls on structured business data such as financial accounting, human resources, payroll, treasury operations, etc.
  2. Following a systematic, repetitive approach for testing the effectiveness and efficiency of controls.
  3. Getting periodic self-assessments and certifications for organizational level assertions.
  4. Scanning data after business transactions are performed or committed, thus mostly providing a detective mechanism.
  5. Notifying responsible persons of internal control failures on an exception basis, generally based on the materiality concept.
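The detective, exception-based scanning described in points 4 and 5 can be sketched in a few lines. This is a minimal illustration only: the record fields ("amount", "approver"), the control rule and the materiality threshold are all assumptions, not a reference to any particular monitoring product.

```python
# Sketch of a detective control check: scan committed payment records after
# the fact and report only material exceptions. Field names and the
# threshold are illustrative assumptions.

MATERIALITY_THRESHOLD = 100_000  # report only exceptions at or above this value

def flag_exceptions(transactions, threshold=MATERIALITY_THRESHOLD):
    """Return committed transactions that breach a control rule and are material."""
    exceptions = []
    for txn in transactions:
        # Example control rule: payments must carry an approver.
        if txn.get("approver") is None and txn["amount"] >= threshold:
            exceptions.append(txn)
    return exceptions

committed = [
    {"id": "T1", "amount": 250_000, "approver": None},       # material, unapproved
    {"id": "T2", "amount": 50_000,  "approver": None},       # unapproved but immaterial
    {"id": "T3", "amount": 300_000, "approver": "manager"},  # approved
]
print([t["id"] for t in flag_exceptions(committed)])  # prints ['T1']
```

Note that the scan runs over already-committed data, which is exactly why such monitoring is detective rather than preventive.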

Fraud investigation, on the other hand, is more than just monitoring business controls in an organization.

  1. Investigations on suspicious transactions can be far-reaching in terms of timeframes. While internal controls monitoring usually covers the current quarter, half year or year, a forensic investigation may necessitate going back several years to assess the patterns adopted by the fraudster and quantify the damage caused.
  2. In order to crunch high data volumes, one may need to adopt technology or computer-aided tools for data mining, analysis, simulation, predictive analytics, complex business rules, etc. to determine trends and patterns.
  3. Performing fraud detection or screening of transactions as a preventive measure, before the business transaction is completed, is a must in some scenarios – for example, screening high-volume payments, credit card approvals / blocking, bank ATM network validations, etc.
  4. External sources of information and unstructured data – emails, phone calls, whistle-blower tips – when conjoined with internal business transactions may point to the failure of multiple controls, leading to abuse of power or processes, bribery, corruption, or misappropriation of cash or assets.
  5. Individual controls may be very effective, but a combination of controls may tell a different story – for example:
    1. Controls in the purchase process may be effective, but the purchase officer may be colluding with a preferred vendor or with another employee.
    2. Multiple approval workflows may be working fine, but invoices or contracts may be split to bypass approval levels, pushing through business transactions that violate company policies and favour outside parties.
    3. Administrators authorized to maintain master data may make a flip-flop change in a payee's name to direct payment to themselves once in a while, in a way that goes unnoticed.
    4. Working on holidays or late shifts, combined with suspicious write-offs – say of inventory or consumables – may cover up thefts from warehouses, plants, etc.
    5. Leakage of financial / competitive information either overtly or covertly, sharing of passwords, succumbing to social engineering attacks, undeclared conflicts of interest, etc.
  6. Fraud investigation also needs to be flexible enough to add more factors to the analysis or change the thresholds and parameters in the logic for determining exceptions.
  7. Fraud investigation usually starts off with examining existing internal controls and can throw up new insights into the deficiency of internal controls to be strengthened. There is a two-way benefit for groups involved in testing controls and fraud investigation.
  8. In the event of a fraudster being involved, the human behaviour / psychology and the observation and interpretation thereof, plays a large part in concluding the investigations. The user identity needs to be pseudonymized and business operations must go on unaffected until the case is closed.
  9. Upon conclusion, the case may lead to criminal proceedings that require gathering and submitting evidence in a court of law. Fraud examiners need a basic understanding of the various laws and legal provisions that apply to the specific case under investigation.
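The invoice-splitting pattern in point 5.2 above illustrates why combinations of controls matter: each invoice passes its approval check individually, yet the group together breaches the limit. A rough sketch of such a test, with assumed field names and an assumed single-invoice approval limit:

```python
# Sketch: flag groups of same-day invoices from one vendor that each pass
# the approval limit individually but together exceed it - a possible sign
# of invoice splitting. Field names and the limit are assumptions.

from collections import defaultdict

APPROVAL_LIMIT = 10_000  # assumed single-invoice approval threshold

def find_possible_splits(invoices, limit=APPROVAL_LIMIT):
    """Group invoices by (vendor, date) and flag suspicious groups."""
    groups = defaultdict(list)
    for inv in invoices:
        groups[(inv["vendor"], inv["date"])].append(inv)
    suspicious = []
    for key, grp in groups.items():
        if (len(grp) > 1
                and all(i["amount"] < limit for i in grp)    # each passes alone
                and sum(i["amount"] for i in grp) > limit):  # together they don't
            suspicious.append(key)
    return suspicious

invoices = [
    {"vendor": "V1", "date": "2018-05-02", "amount": 6_000},
    {"vendor": "V1", "date": "2018-05-02", "amount": 7_000},
    {"vendor": "V2", "date": "2018-05-02", "amount": 4_000},
]
print(find_possible_splits(invoices))  # prints [('V1', '2018-05-02')]
```

A real investigation would widen the grouping window beyond a single day and add more factors, in line with point 6 on keeping thresholds and parameters flexible.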

In summary, internal controls monitoring and fraud investigations are like two arms – inputs from both being useful to each other.

I would rate fraud investigation or forensic audit as the broader platform (compared to internal controls monitoring), going by the objectives of the exercise and the challenges presented by the sheer volume of data (external and internal) to be analyzed.

Risks caused by frauds

I have wondered many a time what makes this topic instantly interesting, yet dealt with in hush-hush tones when an anonymous whistle is blown.

Why do organizations and those in the higher echelons postpone / neglect or trivialize the need to look at this risk a little closer (even before an incident happens)?

True (and rightfully so), all organizations give the utmost importance to improving their top-line and bottom-line revenues and profits, but one fraudster can create a devastating setback to what was built up over the years – reputation, goodwill, customer faith, vendor relationships and so on.

Behavioural analysis can reveal a lot about why such risks can happen and the tell-tale signs of perpetrators. According to a 2016 study of Southern Asian countries by the Association of Certified Fraud Examiners (ACFE) (Courtesy: Report to the Nations on Occupational Fraud and Abuse), fraud perpetrators often show red-flag behavioural characteristics associated with their crimes – living beyond their means, unusually close associations with vendors, financial difficulties, etc.

In recent times there are many interpretations of what ultimately leads to a fraud. Here are examples of some of them.

  1. Failure of business integrity.
  2. Lack of ethics.
  3. Suspicious business transactions.
  4. Lack of business partner screening and approval.
  5. Lack of awareness of business dealings between the company and parties related to the organization's management and employees.
  6. Suspicious movement / physical entry of persons whether during or after business hours.
  7. Excessive authorizations / Breach of passwords / networks / servers / applications caused by either internal staff or external hackers.
  8. … and the list can go on.

When broken down into several root causes like the ones cited above, it becomes easier to tackle the overarching subject of “risk of frauds”. You would realize that several arms of the business functions are responsible for proactively tackling these risks.

A closer analysis of the root causes for these risks related to frauds will point to the underlying factors:

  1. Insufficient or lack of business controls (aka internal controls).
  2. Lack of awareness of ethical standards and integrity in business dealings (lack of Governance principles).

Risks, Controls and Governance are intertwined and cannot be dealt with as isolated topics. In my opinion, there cannot be a debate on which one is more important than the other. One needs to have a holistic view of all three aspects – even if you are not able to tackle all of them at the same time due to either resource or cost constraints in the organization, at least be aware about the inter-relationships.

Even large multinationals keep these topics siloed between internal audit, the Board and Audit Committee, and operational departments, which I think confuses the whole issue at hand. This is probably one of the reasons why topics like risk management programs, SOX compliance and technology implementations appear so daunting.

Clearly one has to structure these at a high level and follow a vision statement for effectively bringing in good governance, business controls and risk management programs in a phased manner, but never losing sight of the benefits of an integrated view.

Risk Analysis – A short overview

The topic on risk analysis is always fraught with multiple dimensions and choices.

Each industry – and the specific risks typical of that industry – has to be looked at differently; there is no one-approach-fits-all answer to risk analysis.

In the Banking industry – for example – credit risk is defined as the risk that a borrower may not repay a loan and that the lender may lose the principal of the loan or the interest associated with it. Credit risk arises because borrowers expect to use future cash flows to pay current debts. Read more at //www.investopedia.com/terms/c/creditrisk.asp#ixzz5GtGVzh7Q

In this article, I am not dealing with industry-specific practices of risk analysis, but generic operational risks that are common to all industries or organizations.

Risks can be analysed through multiple approaches – at the end of the risk analysis you would have calibrated each material risk you defined and, possibly, arrived at the probability of its outcome (see earlier article on identifying and defining risks).

  • Quantitative – putting a rupee or dollar impact on the risk, based on the probability of the risk event occurring.

  • Qualitative – not able to estimate a financial number right away, but assessing the damage that could happen – for example, customer dissatisfaction, damage to reputation, a product bill of material or recipe stolen by competitors, key personnel poached by competition, insiders leaking information, etc. These types of risks would ultimately result in a financial loss, but are hard to quantify at the beginning of the risk assessment; they can be quantified at a later stage.

  • Three-point analysis – take a measured approach with 'best case', 'worst case' and 'most likely case' estimates, and calculate a weighted average to rank your risk.

  • Speed of onset of the risk – a very important factor that influences the prioritization of responding to or treating the risk.

  • Use advanced statistical methods, Monte Carlo analysis or scenario modelling to analyze the risk across several factors.

  • Use Machine Learning (ML) on past data to predict possible outcomes in areas where the risk is expected to trend.
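Two of the approaches above can be sketched numerically. This is an illustrative example only: the PERT-style 1-4-1 weighting is one common choice for the three-point estimate, and the triangular distribution and sample figures in the Monte Carlo sketch are assumptions.

```python
# Sketch of a three-point (best / most likely / worst case) weighted
# estimate, plus a tiny Monte Carlo simulation of expected loss.

import random

def three_point_estimate(best, most_likely, worst):
    """PERT-style weighted average of a risk's impact (weights 1-4-1)."""
    return (best + 4 * most_likely + worst) / 6

def monte_carlo_loss(best, most_likely, worst, probability, trials=10_000, seed=42):
    """Per-trial, the risk event occurs with the given probability and its
    impact is drawn from a triangular distribution over the three points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < probability:
            total += rng.triangular(best, worst, most_likely)
    return total / trials  # average simulated loss per trial

print(three_point_estimate(10_000, 25_000, 100_000))  # prints 35000.0
```

The three-point estimate gives a quick ranking number; the Monte Carlo run is useful when you also want a distribution of outcomes rather than a single figure.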

How does one start with risk analysis?

  1. You may want to conduct a workshop or a collaborative survey with key stakeholders in different functional areas to arrive at an inherent risk analysis – essentially asking: what do they understand as the risk drivers or causes? What are the possible consequences or impacts? Where does this risk stand at present? What is the probability of its occurrence, and its impact?

  2. This becomes the starting point for conducting continuous or periodic risk assessments by risk owners or responsible groups. Risk owners or managers may be more comfortable giving qualitative rankings for probabilities or impacts in understandable terms rather than as percentages or scores. Have a mapping mechanism to convert these rankings into quantitative or qualitative impact measures.

  3. Have easily understandable measures of impacts to the business and its effect on strategic objectives. Impact measures should not be limited to only direct financial losses, but should include qualitative measures such as loss of production hours, time delays in hours, productivity measurements, media exposure time, geopolitical factors, customer satisfaction index, vendor reliability, customer credit rating, etc. These would ultimately be converted into financial numbers once you start assessing the risks.

  4. Risk assessments would set targets for each risk on what is the acceptable level the organization can live with – this is sometimes referred to as ‘planned risk’.

  5. Response treatments, remediation or mitigation measures are put in by the risk owners to lower the risk from the observed “inherent risk level” to the “planned risk level”.

  6. The response treatment or mitigation normally takes some time to implement or become effective; periodic assessments during the interim can show a "residual risk level", which is the difference between the current assessed risk level and the planned risk level.

  7. Typically, risk prioritization is shown visually through "heat maps" that bucket the various risks into critical/high, medium and low impacts on one axis and the probability of occurrence on the other. The third dimension – time, or the speed of onset of the risk – can throw up very useful insights for actionable decisions to avert the risk event.
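The bucketing behind a heat map, and the residual-risk arithmetic from steps 6 and 7, can be sketched as follows. The 1-5 scoring scale and bucket cut-offs are assumptions for illustration, not a prescribed standard.

```python
# Sketch of heat-map bucketing plus the residual-risk arithmetic.
# The 1-5 scale and the cut-offs (>=4 high, >=2 medium) are assumed.

def bucket(score):
    """Map a 1-5 probability or impact score to a heat-map bucket."""
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

def heat_map_cell(probability_score, impact_score):
    """Place a risk on the heat map: (probability bucket, impact bucket)."""
    return (bucket(probability_score), bucket(impact_score))

def residual_risk(current_assessed, planned):
    """Residual risk = current assessed level minus the planned (target) level."""
    return max(current_assessed - planned, 0)

print(heat_map_cell(5, 3))  # prints ('high', 'medium')
print(residual_risk(4, 2))  # prints 2
```

This also doubles as the mapping mechanism mentioned in step 2: qualitative labels and numeric scores can be converted through exactly this kind of lookup.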

More on risk assessments and response treatments in my next article.

High level overview of IT risks

This is a huge and ongoing topic – fundamentally because of the rapid innovations happening in the technology space. The term "information technology" as we understand it today encompasses (to name a few) hardware resources, networks, operating systems, virtualization, software engineering, business applications, artificial intelligence (AI), robotics, cloud computing, etc.

New and innovative technologies are emerging at such a pace that within the next few years we will see an enormous transformation in the way things work. Take, for example, the "Internet of Things" or "IoT": a device, piece of equipment, car or building is connected to or embedded with software, sensors or network connections, with the main purpose of collecting and sharing data across the web. The IoT has a whole range of amazing benefits that can make businesses as well as end users smarter, but it can also bring major potential drawbacks and disasters.

In 2015, two cyber security researchers demonstrated how they could exploit an IoT vulnerability in a Jeep Cherokee to disable its brakes and transmission remotely. The technique gives the attacker wireless control via the Internet and could extend to thousands of vehicles. Their software code could send commands through the Jeep's entertainment system to the vehicle's dashboard functions, steering, brakes, transmission, etc. – a nightmare indeed when you think of how your or your family's safety would be vulnerable while driving down a highway.

A high-level overview of the information technology risk areas that need monitoring in most organizations would cover what kind of risk it is, how it is perpetrated and where the risk lies.

1. Information (or data) security risk – Why am I putting this on top of the list? Because many organizations tend to think that information theft or data leakage happens due to attacks on the network or infrastructure by external hackers. Traditionally, therefore, security models looked at how to strengthen the network, firewalls, routers, etc., and solutions such as network sniffers, malware protection, anti-virus software and SIEM systems were prioritized. But the risk environment is changing: data breaches and information leaks have been caused more by insiders, through ingenious ways of penetrating the defences and constant changes in technique sustained over long periods. The innovative, persistent and sophisticated efforts of cyber criminals make such attacks extremely difficult to spot in time to prevent damage.

External parties

External entities hacking into corporate systems or websites may have many motives – information theft, financial crime, disruption of the business by a disgruntled employee who has been fired, or even a teenage whiz kid testing hacking methods for fun.

The methods may range from phishing mails to embedded macros in Excel files or executables in portable document files (PDF) that trigger a payload, or other "social engineering". Many a time, employees' or insiders' email IDs are compromised through social networking sites. Hacking campaigns often play out over a period of time, enticing users with a quid pro quo (say, an offer of freebies or other gifts) and then leading them on to click on files or links that give the hacker full control over the compromised device / desktop / laptop or server.

Deeper reconnaissance of targets (such as what happened with the Snapchat CEO's mail) involves a lot of time and effort: hackers study the habits and patterns of the emails a user sends, and the knowledge or information extracted is then used for malicious purposes. In the Snapchat episode, a phishing attack tricked an HR employee into handing over payroll information about "some current and former employees".

It seems Snapchat fell prey to an embarrassingly common type of phishing email, which purports to come from the head of the company itself. In this case, an email supposedly from the chief executive officer was sent to the HR staffer, who responded with the information requested. It's easy to see why: who wants to keep the chief executive waiting when they ask for information?

Social engineering techniques can take many forms. One is obtaining the confidence and trust of a person to ultimately gain access to sensitive information such as email IDs and passwords, social security numbers or credit card details. Another is simply overhearing conversations in public places that disclose confidential information – say, a company's sales closure status or pricing – which could fall into the hands of competitors.

Internal employees or others having legitimate access to data

Many studies have noted that insiders account for a greater percentage of frauds resulting in asset misappropriation, financial losses, or corruption and bribes. Insiders are in a privileged position, with access to the most important and crucial information of the company, and this can be compromised in many ways: collusion with third parties such as vendors, related-party transactions, bribes from competitors, fudging expense reimbursements, padding payroll checks and so on. They are aided by poor internal controls in processes and possibly even poorer oversight by supervisors and management.

The larger the organization, the longer it takes for a fraud to be unearthed – and by then the perpetrator may even have left the organization.

2. The way business applications are accessed – There is a dramatic shift in the way companies have chosen to do business transactions.

Multiple devices: With the widespread use of laptops, tablets and mobile phones, companies no longer tie employees to their desktops but seek to gain more productivity even when they are on the go or outside the corporate network. Corporate policies encourage bring-your-own-device (BYOD) or work from home, which extends the scope of security to all devices. It also becomes very necessary to understand which critical assets of the organization are at risk of being compromised, and whether information lives inside or outside the corporate network.

Cloud applications: Cloud computing is evolving at a fast pace, and companies are looking to increase the cost effectiveness of their IT initiatives, since they no longer need to build complex and costly software on premise and incur huge maintenance costs. While it gives the organization a variety of choices, like all other technology changes cloud computing has its own benefits and disadvantages. Issues around legal and regulatory risks, privacy, confidentiality, integrity and accessibility of information, etc. need to be looked into.

On-premise applications: Organizations opting for on-premise software have to manage the risks involved in program / software implementation and the accompanying change management challenges. These risks typically have to be evaluated on an ongoing basis for each individual software development project, to manage cost against budgets and time frames. Emerging technologies may make changes in the implementation approach very costly, potentially rendering the software unviable before it is put into productive use.

3. IT Asset Management risks

CIOs and CTOs are charged with managing an ever-changing IT landscape – upgrades and enhancements to keep pace with technological changes in hardware, servers, network requirements and communication systems, while avoiding excess stocking of equipment. Software licence compliance must be constantly monitored to avoid potential risks related to licence violations or breaches of vendor contractual obligations.

Best business practices demand that organizations have a systematic approach and policies for purchase and maintenance of IT resources.

4. Identity and Access authorization risks

I have already discussed in detail (in another article, "Foundation of internal controls") the need to ensure minimal or no risk in application access through effective segregation of duties. It is one of the foundations of proper internal control design and sets the tone for many other controls. Proper user life cycle management encompasses a whole host of controls – from providing a unique identity for access across applications, databases, networks, operating systems, etc., to handling segregation-of-duties risks. Technologies available today help organizations struggling with a manual approach, through automated user provisioning and de-provisioning during on-boarding and off-boarding across the hire-to-retire cycle.

5. Non-compliance risks – Data privacy regulations

This century has witnessed a boom in information, and data is crunched by what is now known as "Big Data Analytics". Much of this information consists of personal details of individuals: their demographic information, the products they buy or prefer to buy, the locations they have visited and, of course, the data generated by their use of "smart" devices – ranging from smart phones and smart cards to other things connected to the internet.

Data privacy compliance has become mandatory in several countries. The most recent and most stringent law is the comprehensive EU General Data Protection Regulation, 2016 (GDPR), governing the member countries of the European Union, which puts in place several requirements for data privacy and protection of information. It applies not only to organizations within the European Union (EU) but also to any business or organization outside the EU that has access to the private information of EU citizens.

A variation of this model was earlier adopted in Australia in the form of the Privacy Act, and in Canada in the form of the Personal Information Protection and Electronic Documents Act, 2000 (PIPEDA). In both countries the model involves cooperation between the government and industry.

In the US, by contrast, privacy is framed as "liberty protection" – a "right to be let alone" ensuring as little intrusion from the government as possible. Unlike the EU, there are no comprehensive principles for collecting and disclosing data in the US; instead, there is a limited, sector-specific approach that varies between the public and private sectors. Broad legislation exists, such as the Privacy Act, 1974 (based on FIPPs, covering collection of data by the government), the Electronic Communications Privacy Act, 1986, and the Right to Financial Privacy Act, 1978, along with sector-specific regulations like the Gramm-Leach-Bliley Act (GLB), the Health Insurance Portability and Accountability Act (HIPAA) and the Children's Online Privacy Protection Act (COPPA). In addition, states have their own data protection laws.

In India, though the drafting of data protection law started piecemeal, several legislative attempts have been made to secure information privacy in various sectors. These include the general data protection rules under the Information Technology Act, 2000 (IT Act), the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016, and sector-specific regulations for telecom, banking and financial services through various Acts and rules.

With a whole host of laws and regulations that businesses need to comply with, it is needless to say that non-compliance is a major risk factor to be reckoned with.

6. Data storage, archival, retrieval and disaster recovery plans

Last but not least, risks have to be assessed and mitigated for proper data storage, and for periodic archival and retrieval as required by the business or by tax / government authorities. Disaster recovery plans are a must for ensuring business continuity, and they form part of the necessary business controls to be enforced by the CIO / CTO.