Sunday, November 6, 2022

Chaos Engineering Clarity


My recent conversations with the technical community made me feel that Chaos Engineering (CE) concepts are being misunderstood. Since this is an emerging practice in the industry, deeper understanding will eventually clear up the confusion. Comprehensive documentation about this practice already exists, defining its objectives, principles, and implementation, and it is widely available through the various open source and commercial CE tools on the market.

Still, technical folks new to CE have doubts, and I believe further clarity needs to be provided widely. I have gathered many questions through my interactions with technical teams and will try to address some of them in this blog post.

Before I begin, read through the definition of Chaos Engineering. As per Wikipedia:

Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system's capability to withstand turbulent conditions in production


Is chaos engineering practice against compliance and regulation? 

No! In fact, CE experiments help in meeting security and compliance standards by uncovering hidden issues. Compliance requirements generally exist to ensure data privacy, data protection, security, adherence to country- or region-specific laws, and standard processes. 
The objective of chaos engineering is to gain a deeper understanding of IT systems and eliminate or minimize application/service outages. Fault-injection tests are supposed to be done in a controlled manner, with a rollback plan as part of the experiment. 
As for tool usage, just like any third-party tool used in an application, CE tools are also required to comply with an organization's security and licensing standards. The scans that enterprises mandate usually check for all known CVEs in third-party tools.


Is chaos engineering a replacement for security-related tests like static analysis and dynamic/penetration testing?

General CE practice is not meant to replace security tests like static, dynamic, or interactive application security testing (SAST, DAST, or IAST). Both security testing and CE involve injecting faults into the system. The method of analysing a system by trying to break it is the same in both practices, but the objectives differ. 

I do see attempts being made to extend CE to cover certain types of security testing; a new term, Security Chaos Engineering, has even been coined for this purpose. It uses the same fault-injection approach to introduce security flaws into the system. However, one should not confuse general CE practice with security testing. Their objectives are different even if the same tool is used for both. 


Is chaos engineering suitable only for cloud infrastructure?

Any type of infrastructure that hosts your applications can be considered, whether it is cloud, containerized, virtual machine, or physical server infrastructure. The widely available CE documentation happens to focus its examples on cloud infrastructure, but that doesn't mean the practice is applicable only to the cloud. 


Can it be made part of automation through CI pipelines?

Automating fault injection at various layers, such as the application, the network, and computing resources, is quite possible. However, the CE process requires observability through instrumentation, and the analysis is better carried out manually together with all stakeholders. Fully automating the system-breaking process could defeat the objective of gaining a deeper understanding of the system and its behaviour when faults are deliberately induced. 

The crucial phase of CE practice is to have a game day (mock drill) where faults are injected and the system's behavior is captured. This helps teams be prepared when such a situation arises for real in the future. 
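
For teams that do choose to automate part of this, below is a minimal, hypothetical sketch (not taken from any specific CE tool) of a fault-injection step a CI pipeline could run as a gate: it injects a fault, checks a health endpoint against a latency SLO, and always rolls back. The inject/rollback scripts and the `/health` URL are placeholders you would replace with your own tooling.

```python
"""Hypothetical CI gate: inject a fault, verify the service still meets its SLO, roll back.

The fault-injection commands and health URL below are placeholders, not a real tool's API.
"""
import subprocess
import sys
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"              # placeholder health endpoint
INJECT_CMD = ["./inject_fault.sh", "latency", "200ms"]   # placeholder fault-injection script
ROLLBACK_CMD = ["./rollback_fault.sh"]                   # placeholder rollback script
SLO_MAX_LATENCY_S = 1.0   # a health check slower than this counts as an SLO breach
CHECK_COUNT = 10

def healthy_within_slo() -> bool:
    """Return True if the health endpoint answers 200 within the latency SLO."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=SLO_MAX_LATENCY_S) as resp:
            ok = resp.status == 200
    except Exception:
        return False
    return ok and (time.monotonic() - start) <= SLO_MAX_LATENCY_S

def main() -> int:
    subprocess.run(INJECT_CMD, check=True)        # steady state assumed verified before this step
    try:
        failures = sum(0 if healthy_within_slo() else 1 for _ in range(CHECK_COUNT))
    finally:
        subprocess.run(ROLLBACK_CMD, check=True)  # always roll back, even if checks error out
    print(f"{failures}/{CHECK_COUNT} health checks breached the SLO")
    return 1 if failures else 0                   # non-zero exit fails the CI stage

if __name__ == "__main__":
    sys.exit(main())
```

Even when such a gate exists, the game day with all stakeholders remains the place where the real learning happens.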


How is chaos engineering different from load testing or performance testing?

The objective of load testing is to determine and benchmark system performance under load. This helps teams understand the system's capacity and design it accordingly. The system's scalability requirements are identified and fulfilled with the help of performance testing. 

This is entirely different from the objectives and approach of CE. However, a CE practice can include load testing tools to further test the behavior of the system under load while a fault is injected, as sketched below. Apart from that, there is no other connection between CE and performance testing; they remain different in their objectives and approaches. 
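
As an illustration of that combination, here is a small, hypothetical sketch that generates concurrent load against an endpoint and reports latency percentiles; you would run it once in steady state and once while your CE tool has a fault active, then compare the numbers. The target URL is a placeholder.

```python
"""Hypothetical sketch: measure latency percentiles under concurrent load.

Run once in steady state and once while a fault is injected, then compare the results.
"""
import concurrent.futures
import statistics
import time
import urllib.request

TARGET_URL = "http://localhost:8080/api/orders"  # placeholder endpoint under test
REQUESTS = 200
CONCURRENCY = 20

def timed_request(_: int) -> float:
    """Issue one request and return its latency in seconds (infinity on failure)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            resp.read()
        return time.monotonic() - start
    except Exception:
        return float("inf")

def main() -> None:
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))
    ok = [l for l in latencies if l != float("inf")]
    print(f"errors: {REQUESTS - len(ok)}/{REQUESTS}")
    if ok:
        print(f"p50: {statistics.median(ok):.3f}s  p95: {ok[int(0.95 * len(ok)) - 1]:.3f}s")

if __name__ == "__main__":
    main()
```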


Is it necessary to use a chaos engineering tool?

Tools help you get started quickly on achieving the CE objectives. There are various open source and commercial tools available in the market that make ready-made fault injections available to you with an easy setup. Most of the tools come packaged with resource exhaustion tests (RAM, disk, I/O), network tests (latency), infrastructure tests (pods, containers, VMs, servers), or application-level tests. 

Writing these tests from scratch requires a lot of time and effort. I would recommend using one of these tools if you are new to the CE practice. Once you are proficient, you can start building your own tooling. 
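
To give a feel for what "from scratch" involves, here is a minimal, illustrative sketch of one of the simplest faults, a temporary CPU exhaustion on the local host with an automatic stop. Real tools add safety checks, blast-radius limits, scheduling, and observability on top of something like this.

```python
"""Illustrative do-it-yourself fault: burn CPU on N workers for a fixed duration.

Real CE tools wrap faults like this with safety checks, blast-radius limits, and reporting.
"""
import multiprocessing
import time

def burn_cpu(stop_at: float) -> None:
    """Spin in a tight loop until the deadline, keeping one core busy."""
    x = 0
    while time.monotonic() < stop_at:
        x += 1  # busy work

def inject_cpu_exhaustion(workers: int = 2, duration_s: float = 10.0) -> None:
    """Start `workers` processes that each saturate a core, then stop automatically."""
    stop_at = time.monotonic() + duration_s
    procs = [multiprocessing.Process(target=burn_cpu, args=(stop_at,)) for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()   # the fault ends on its own once the deadline passes

if __name__ == "__main__":
    print("Injecting CPU exhaustion for 10 seconds on 2 cores...")
    inject_cpu_exhaustion()
    print("Fault window over; the system should return to steady state.")
```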


I hope this post helps you. Thanks for reading, and I look forward to your feedback.


References

http://principlesofchaos.org/

https://arxiv.org/abs/2006.04444

https://www.ibm.com/cloud/architecture/architecture/practices/chaos-engineering-principles/


     

 



Friday, September 23, 2022

Software Architecture in DevOps Age


I started my architecture journey designing and solutioning a monolithic system. It was the era of layered software architecture, where a system was separated into a data layer, business layer, application layer, and presentation layer. Lengthy design phases for software architecture were the norm back then. As an architect, I was mainly involved in collecting high-level requirements, creating the governance model, reviewing enterprise standards, documenting, communicating through architecture diagrams, identifying design patterns, and designing the components to be used in the software development process. Software development started only after all that elaborate design work. There was a strong belief that architecture and design must be completely finished before implementation could start. The high coupling that resulted from this model created dependencies between teams and eventually slowed down deliveries.

During 2013-14, when our team was introduced to Agile, we had concerns and questions. As an architect, my apprehension was whether traditional architecture would fit into the Agile space. The concerns I had were:




  • How do we refactor if later sprints change the architecture?
  • Will short-term planning introduce major structural changes in the future? How do we handle them?
  • How can an architect effectively communicate with a larger number of small Agile teams?
  • How do we get long-term visibility into the client's requirement roadmap?

Service Oriented Architecture
As our product was quite stable and there were not many architectural changes until the end of 2014, the newly introduced Agile teams faced few challenges like architecture refactoring. This period of transition helped me work on my fears. During this time, I was introduced to domain-driven design, which helped me decompose the layered system further into services. This was my first discovery of how to work on multiple functional aspects in parallel that fall into different logical categories, and it helped me shift from large programs toward multiple autonomous teams. 

DevOps Journey
In 2017, I got an opportunity to architect a large-scale solution from scratch. I started this work with a group of architects and engineers who were ready to adapt to the DevOps model. 
The architecture practices I explored with the DevOps model were: 
1. Priorities: Planning scope was primarily focused on our highest priorities based on assessed business capabilities. This helped us start with a simple solution and refine it incrementally and iteratively. The essence was to do just enough architecture to get through the next sprint. The architecture design iterates based on feedback from the organization's planning ecosystem, and also on new information and changes that occur in the organization's environment while planning and architecture are in progress.
2. Automation Everywhere: Automating everything from infrastructure provisioning to running test cases helped remove error-prone manual effort. 
3. User-Oriented Design: Focusing on the user journey helped in collecting the non-functional requirements. 
4. Collaboration: As an architect, I was part of every squad and shared ownership of their delivery commitments. 
5. Architecture Review Board: The board, comprising architects and product owners as committee members, reviewed every new feature and epic to provide 360-degree feedback and perform impact analysis. 


Paradigm Shift in Design Process 

Moving from old-style complete up-front design to a priority-based minimum viable architecture brought various changes to the design process. Challenges such as an unclear roadmap, unknown usage requirements, changing requirements, cost, and the system's evolvability introduced risks into the design. This called for a design that provides a cushion against all of these challenges. The design strategies that helped are: 

  • Separation of concerns: Separating the software system into distinct sections, such that each section addresses a separate concern
  • Modularization: Decomposing the system into modules driven by information hiding and separation of concerns
  • Loosely coupled interfaces: Interactions between systems were based on open standards like APIs to reduce interdependence
  • Event driven: Real-time data flow between the loosely coupled systems (a minimal sketch follows this list)
  • Distributed systems: Taking full advantage of modern multi-core processor technologies, systems are distributed to run concurrently, supporting horizontal scaling and elasticity under varying workloads
  • Non-functional requirements: Considering the important non-functional requirements is key to designing a system for the long term with minimal core changes
  • Meta-modeling: Modeling the concepts and relationships of a modeling language/notation
  • Augmented Intelligence: Rule engines to lower the cost of changing the behavior of the system
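
To illustrate the "loosely coupled" and "event driven" items above, here is a minimal, generic sketch (not tied to any specific broker) of an in-process publish/subscribe bus: the publisher knows nothing about its subscribers, so either side can change independently. In production this role is typically played by a broker such as Kafka.

```python
"""Minimal in-process publish/subscribe sketch illustrating loose coupling.

In a real system a broker (e.g. Kafka) plays the role of this bus; this is only an illustration.
"""
from collections import defaultdict
from typing import Any, Callable, DefaultDict, Dict, List

Handler = Callable[[Dict[str, Any]], None]

class EventBus:
    """Routes events from publishers to subscribers; neither side knows about the other."""

    def __init__(self) -> None:
        self._handlers: DefaultDict[str, List[Handler]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Handler) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, event: Dict[str, Any]) -> None:
        for handler in self._handlers[topic]:
            handler(event)

# Two independent consumers of the same event: each can evolve or be replaced separately.
def send_confirmation(event: Dict[str, Any]) -> None:
    print(f"email: order {event['order_id']} confirmed")

def update_inventory(event: Dict[str, Any]) -> None:
    print(f"inventory: reserve items for order {event['order_id']}")

if __name__ == "__main__":
    bus = EventBus()
    bus.subscribe("order.placed", send_confirmation)
    bus.subscribe("order.placed", update_inventory)
    # The ordering service only publishes the fact; it has no dependency on the consumers.
    bus.publish("order.placed", {"order_id": 42})
```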



Technology to adopt DevOps effectively
It would have been a harsh DevOps journey without the support of a great set of modern technologies and tools. Some of the tools that immensely helped me are:
  • Cloud Native Stacks
  • Containerization
  • Test Automation Tool
  • Pipeline Management Tool
  • Code Scanning Tools
  • Deployment Automation Tool

Areas out of direct control

There are numerous aspects that are not under the direct control of architects. Changing business dynamics, shifts in customer interest, disruptive technologies, and so on can happen at any time, and the architecture should be able to absorb them with minimal refactoring effort. A few areas where I experienced design refactoring are: 

  • Core feature replacements or new additions
  • Replacement of obsolete technologies
  • The sunset of an external system we were depending on for data


Summary

Just like everyone on a DevOps team works across the entire application lifecycle, from development to deployment, architects also play a key role in every aspect of the software lifecycle in a DevOps culture. As an architect, by managing change and complexity, I ensured that the DevOps objective of delivering the software end product quickly and efficiently was successfully met. As I mature with DevOps, my focus is less on the tools, automation, and orchestration; instead, it is more about communication, collaboration, and a collective effort to remove bottlenecks.



Cool DevOps industry leaders I follow
@danielbryantuk
@JayneGroll

Sunday, July 31, 2022

The Role of Web Application Firewall(WAF) in Security


“A web application firewall (WAF) is a specific form of application firewall that filters, monitors, and blocks HTTP traffic to and from a web service.” -Wikipedia


According to the PCI DSS Information Supplement for requirement 6.6, a WAF is “a security policy enforcement point positioned between a web application and the client endpoint.”



A WAF is an application-level firewall commonly used to protect web applications. It sits in front of web applications to monitor HTTP traffic coming from the internet, detecting and blocking malicious requests in real time. It forms the first line of defence for the web environment of users and companies. 



Types of WAFs


WAF functionality can be implemented in software or hardware, running in an appliance device, or in a typical server running a common operating system. It may be a stand-alone device or integrated into other network components.


Hardware WAF

This type of WAF comes as part of a hardware appliance that can be deployed in the local network where the main web servers run. The device comes with its own computing resources and is suitable for websites that handle heavy traffic.


Software WAF

A software WAF is normally installed, set up, and maintained in a virtual machine. It is much cheaper and more flexible than a hardware WAF, but its throughput can be lower. 


SaaS WAF

This type is managed by a cloud service provider, so there is no maintenance overhead: optimising, patching, and management are all handled by the provider. Ease of use and lower cost are its main advantages. 




Core Capabilities of WAF

These are must-have features that most WAFs support; some commercial offerings provide many more advanced features on top.


Reverse proxy for intercepting the incoming traffic

This is the most crucial feature that every WAF must support. Every incoming request to the server is first intercepted by the WAF, which works exactly like a reverse proxy.


Rule based logic, Parsing and signatures

Rules or policies specify what the WAF needs to look out for: specific patterns (signatures) in the incoming web traffic. They also include the blocking action to take when an attack attempt is detected, as in the sketch below.
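
As a very rough illustration of how the interception and rule-matching pieces fit together (not how any particular WAF product implements them), the sketch below checks an intercepted request against a couple of deliberately simplistic signature rules and decides whether to block it or forward it to the upstream server.

```python
"""Very simplified illustration of WAF-style rule matching on an intercepted request.

The signatures here are deliberately naive; real WAF rulesets are far more elaborate.
"""
import re
from typing import List, Tuple

# Each rule: (name, compiled pattern, action on match)
RULES: List[Tuple[str, re.Pattern, str]] = [
    ("sql-injection", re.compile(r"('|--|\bunion\b\s+\bselect\b)", re.IGNORECASE), "block"),
    ("xss", re.compile(r"<\s*script\b", re.IGNORECASE), "block"),
    ("path-traversal", re.compile(r"\.\./"), "block"),
]

def inspect(method: str, path: str, query: str, body: str) -> str:
    """Return 'block' if any rule matches the request data, else 'forward'."""
    payload = " ".join([method, path, query, body])
    for name, pattern, action in RULES:
        if pattern.search(payload):
            print(f"rule '{name}' matched -> {action}")
            return action
    return "forward"   # hand the request to the upstream web server (the reverse-proxy step)

if __name__ == "__main__":
    print(inspect("GET", "/search", "q=' OR 1=1 --", ""))       # blocked
    print(inspect("GET", "/products", "category=books", ""))    # forwarded
```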


Protection against OWASP top 10 security flaws 

At a minimum, a WAF must detect the attacks in the OWASP Top 10, a standard awareness document for developers and web application security that represents a broad consensus about the most critical security risks to web applications. 


[Picture courtesy:OWASP]

Configurable for covering new attacks

Users should be able to customize the rules with simple configuration so that new types of attacks can be detected, and they should be able to modify that configuration on demand.


Blocklists and Allowlists

This feature supports both positive (allowlist) and negative (blocklist) security models against known attacks.


Logs for data analysis

Logs help users debug and analyze the data stream.




Advanced Features

Commercial WAFs offer many advanced features to add value to their offerings. 


DDOS protection 

Protection against denial of service attacks


UI Console

An intuitive dashboard user interface for viewing stats and other reports. It can be used for quick data analysis as well. 


Threat intelligence

Machine learning to detect suspicious activity, spotting the latest attack strategies by identifying hacking patterns.


Failover protection

Since a WAF can become a bottleneck and a single point of failure in the whole ecosystem, this feature ensures high availability by rolling out a new WAF instance in case of a crash. 


High HTTP throughput

Faster assessment of a wide variety of risks using distributed WAFs helps maintain good throughput. 


Sensitive data protection 

This feature alerts on responses containing sensitive data


Plugin to existing web servers

Certain web servers allow extensions that help users extend their capabilities. A WAF deployed as a server plugin makes the setup uniform and easy to configure. 


Brute force attack prevention 

This feature protects against automated tools that run successive attempts (for example, repeated login requests) to gain control.


Attack analysis

Helping users analyze attacks adds high value to WAF offerings.  


Continuous upgrades

WAFs must be upgraded continuously to tackle new attack types. Thousands of new attacks are detected every year; more than 3,000 new vulnerabilities were discovered in 2021 alone. 



Is WAF a Silver Bullet?

WAFs can only detect attacks at the HTTP layer, not at other layers. For example, at the network layer there should be separate network firewalls and IPS (intrusion prevention systems). 


In spite of the numerous features, enablers, and detection techniques, there are various tools and techniques used to bypass WAFs today. Some of the known approaches are browser emulation, obfuscation, encodings, and payload character modification. Because WAF rules and policies are configured mainly with regular expressions, hackers find innovative ways to bypass them by modifying payloads, and automated tools help them speed up the process and locate weak spots in WAF rulesets. The sketch below illustrates how a naive regex rule can be sidestepped.
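
To make the regex-bypass point concrete, here is a small, contrived illustration: a naive signature that looks for `<script` in the raw query string misses the same payload once it is URL-encoded, while the application that later decodes the parameter still receives the malicious value. Real WAFs normalise and decode input precisely to close this gap, and attackers respond with further encodings and obfuscation.

```python
"""Contrived illustration of why naive regex signatures can be bypassed via encoding.

A rule that only inspects the raw query string misses a URL-encoded payload that the
application will happily decode. Real WAFs decode/normalise input to counter this.
"""
import re
from urllib.parse import unquote

NAIVE_XSS_RULE = re.compile(r"<\s*script", re.IGNORECASE)

def naive_waf_allows(raw_query: str) -> bool:
    """Naive check applied to the raw (still-encoded) query string."""
    return NAIVE_XSS_RULE.search(raw_query) is None

if __name__ == "__main__":
    plain = "q=<script>alert(1)</script>"
    encoded = "q=%3Cscript%3Ealert(1)%3C%2Fscript%3E"

    print("plain payload allowed?  ", naive_waf_allows(plain))    # False: the rule catches it
    print("encoded payload allowed?", naive_waf_allows(encoded))  # True: the rule is bypassed
    # ...yet the application sees the same dangerous value after decoding:
    print("what the app decodes:   ", unquote(encoded))
```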



Conclusion

A WAF is not a silver bullet, and hackers continuously find new ways to break its protection. One can't relax just by introducing a WAF into the infrastructure. Protection is a never-ending process, with hackers finding new ways to break in every day. It requires continuous effort to stay up to date on the latest security vulnerabilities and to upgrade the system accordingly. 




Sunday, June 26, 2022

Enterprise Application Modernization - a lot has changed in the last decade!

Fast-changing business dynamics drive organizations to treat their application modernization strategy as being of paramount importance, and the fast pace of technological advancement inspires the technical community to explore new possibilities. Companies reserve significant budget to refurbish their legacy software, whether by modernizing the platform, refactoring the tech stack, or even purchasing a replacement. Depending on industry, target market, application scalability, reliability, and other factors, organisations identify impactful areas such as market share, revenue, differentiated customer experience, and more.

The popular motives behind app modernization are operational efficiency, cost savings, improved customer experience, reliability, and enhanced security. In some cases, legacy technologies with mounting technical debt also prompt transformation. The best way to identify technology obsolescence is to keep watch on the growing technical debt: a steep increase in that metric is a clear indicator of the need to modernize the application. Financial constraints, workload, delivery pressure, and resistance to change are the topmost challenges companies face in undertaking modernization.


During 2001-2010

In the first decade of this century, the industry saw a spike in website and web application development, predominantly using client-server technologies. There is an informative website that explores the timeline of milestones in the history of web design from 1990 to the present. Websites and applications have been on a continuous journey of transformation ever since. The MVC pattern, Web 2.0, portals, service-oriented architecture, and automation were the biggest drivers of modernization during 2000-2009.

During 2011-2020

The second decade saw the rise of cloud technologies and mobile apps. Use cases for smart devices disrupted the market. IT infrastructure moved from owned data centers to the cloud. Application deployment transformed from physical servers to virtual servers and ultimately found its place in containers. Application architecture transformed from monolithic to multi-layer and eventually to microservices. The development process moved away from waterfall to Agile and then to DevOps. Container orchestration tools like Kubernetes and OpenShift made cloud-native development possible.
 


Heavyweight application servers like WebSphere and WebLogic were replaced with lighter and faster servers. With huge volumes of data being generated, the development of distributed systems took the limelight. Application integration shifted from SOAP to REST APIs and to distributed publish-subscribe mechanisms like Kafka. There was a paradigm shift in the software development process, with DevOps taking centre stage. Big data and data analytics created churn in the market and drove digital transformation.


What's Next


With rising numbers of internet users, a trend accelerated by the pandemic, various factors have surfaced that influence companies:

  • Increased online transactions and digital payments
  • Remote work requiring better collaboration and virtual meeting infrastructure
  • Ever-increasing cybersecurity concerns and fraud prevention
  • Hybrid cloud infrastructure
  • The growing online education market
  • Faster AI adoption and AIOps
  • Infrastructure scalability and reliability


Technological transformation this decade will probably be driven by the likes of AI, blockchain, gRPC, HTTP/2, data fabric, data streaming, 5G, and quantum technologies. R&D in these niche areas is already showing hundreds of successful use cases. Many leading technology companies have adopted them, and the trend is showing a clear path for others. 


 

Sunday, May 29, 2022

 Lessons for IT teams from the Atlassian outage incident








Atlassian's IT teams took a whopping 14 days to finish recovering from the outage that started on April 4th, 2022. I recommend going through the Atlassian outage update page before you proceed to read further here. 


The Atlassian team communicated clearly that the outage was caused by a failure while executing the sunset and migration of one of their legacy applications. The interesting root cause analysis shows how communication between teams failed and how a faulty script further aggravated the situation. Even with a severe incident of accidental permanent data deletion, the Atlassian teams quickly came together and took complete control. One must appreciate those teams for preventing further damage and restoring customer data despite all the constraints they had.


The whole incident teaches us a few things that we perhaps already know but procrastinate on keeping in order. 


Regular data backup with automated testing

    The Atlassian team explained in their status report how they recovered using immutable backup data. Regular testing of backup data is one essential aspect that most IT teams give the least importance to. Disaster recovery KPIs like RPO (recovery point objective) and RTO (recovery time objective) must be tested and measured periodically, as sketched below. 
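
A hedged sketch of what "testing the KPIs" can look like in practice: restore the latest backup into a scratch environment, time the restore (an RTO sample), and compare the backup's timestamp with the current time (an RPO sample). The `restore_backup.sh` script, the backup metadata file, and the thresholds are hypothetical placeholders for your own tooling and targets.

```python
"""Hypothetical periodic restore test: measure RTO (restore duration) and RPO (backup age).

The restore script, metadata path, and thresholds below are placeholders, not a real product's API.
"""
import datetime
import json
import subprocess
import sys
import time

RESTORE_CMD = ["./restore_backup.sh", "--target", "scratch-env"]  # placeholder restore command
BACKUP_METADATA = "latest_backup.json"   # placeholder file with {"created_at": "<ISO timestamp>"}
RTO_TARGET_S = 4 * 3600     # example target: restore within 4 hours
RPO_TARGET_S = 24 * 3600    # example target: lose at most 24 hours of data

def main() -> int:
    with open(BACKUP_METADATA) as f:
        created_at = datetime.datetime.fromisoformat(json.load(f)["created_at"])
    if created_at.tzinfo is None:
        created_at = created_at.replace(tzinfo=datetime.timezone.utc)  # assume UTC if no offset
    rpo_s = (datetime.datetime.now(datetime.timezone.utc) - created_at).total_seconds()

    start = time.monotonic()
    subprocess.run(RESTORE_CMD, check=True)     # restore into an isolated scratch environment
    rto_s = time.monotonic() - start

    print(f"RPO sample: {rpo_s / 3600:.1f}h (target {RPO_TARGET_S / 3600:.0f}h)")
    print(f"RTO sample: {rto_s / 3600:.1f}h (target {RTO_TARGET_S / 3600:.0f}h)")
    ok = rpo_s <= RPO_TARGET_S and rto_s <= RTO_TARGET_S
    return 0 if ok else 1    # alert or fail the scheduled job when targets are missed

if __name__ == "__main__":
    sys.exit(main())
```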


Segregated Development, Test, and Staging Environments

    With ever-decreasing computing costs, budget won't be a serious constraint for IT teams to maintain multiple lower environments. Teams must have stable lower environments and a defined deployment process. Production deployment without validation in lower environments must be discouraged. The staging environment must have data close to production data, with proper masking of sensitive and confidential information. This way, tests in staging would surface potential data issues that could happen in production. It is not clear from the Atlassian report whether the development team tested in at least one lower environment before deleting the legacy application in production; that would have saved them all this hassle.


IT applications uniqueness

    Distinguish applications using unique IDs in the enterprise library. Define the dependencies between applications and maintain detailed documentation. Remove ambiguity in application naming. This helps teams communicate better. 


Automation 

    We all know how important automation is for business. This incident is solid proof that it saves not just cost but reputation as well. The Atlassian IT team had not automated the backup data recovery process. This lack of automation, along with non-segregated customer data, cost them 14 days to recover fully. 


Auditing

    We often neglect to follow established governance processes. A production deployment review and approval process run by a team with a broader business vision meets stiff resistance and gets labelled counterproductive, bureaucratic, and so on. Regular audits of security and of non-functional requirements like high availability and disaster recovery are critical to business continuity and must not be deprioritised against the regular process. 


Application sunset process

    Sunsetting an application is not a trivial task. Just because it has lost the focus of customers and the business does not mean a legacy application carries no data or dependencies. This task must be given due diligence. Dependency checking, data archiving, compliance guidelines, and stakeholder communication are some of the sub-tasks under this activity that must be executed before deleting anything.


Clear Communication

    Nothing else is as important as this in any aspect of our lives. A lot has been said and taught about communication, and still we fail at it. 


Mistakes are inevitable. To counter the fear of failure, we must anticipate it and be prepared to face it. Any system can fail. Designing the system for failure is one of the best approaches the industry is focusing on. There are various techniques, and you must adopt the ones that suit your system design. This will ensure quick recovery and minimal downtime. 

Sunday, April 17, 2022

 India's Personal Data Protection Bill 2021 - Chapter-wise Summary for Techies



The Government of India (GoI) is in the process of framing comprehensive, specific legislation to protect the personal data of its citizens. The Joint Parliamentary Committee (JPC) was formed in 2019 to study and shape the Personal Data Protection (PDP) bill for India. After two years, the committee tabled its report in the Indian parliament with its recommendations for protecting the personal data of Indian citizens. 


Many countries have already framed protection laws to safeguard the privacy of their citizens. Although PDP is similar to other countries' laws, especially the GDPR (the European Union's General Data Protection Regulation), it is critical to understand the nitty-gritty of the bill to remain compliant while doing business in India. Global IT companies complying with various countries' privacy laws can extend their implementations with minimal effort to comply with PDP once it is enacted, as it provides a transition period. 


My intention in this blog is to highlight the key recommendations of the JPC by going through each chapter. I'm keeping it concise to help developers, designers, architects, and product owners get a quick summary of this bill. For more details, one can refer to the appropriate section of the PDP bill report linked in the references section below. I'm mentioning the section numbers alongside the clauses to help readers quickly find them in the original JPC report. There are fourteen chapters in this bill explaining public policy on data protection, and I'm only listing the key aspects that are important for IT professionals.



Chapter 1: Preliminary

This chapter contains the official definitions, meanings, terms, and scope that subsequent chapters reference. It is critical to go through it carefully to understand the definitions. Important keywords are:

Personal data, non-personal data, sensitive data classification, data fiduciary (processor), authorities, data profiling, and so on. One key highlight is that the provisions of this bill apply to both personal and non-personal data. Processing of personal data includes collecting, storing, disclosing, and sharing within the territory of India, and also applies to those not present in the territory of India but carrying out business in India. 


Section 15 defines "person" as per this report, and clarity on who falls under that definition is a must. As per the report, a person can be an individual, a Hindu undivided family, a company, a firm, an association of persons or a body of individuals, the state, or any artificial juridical person. 

 

Section 41 in this chapter lists what constitutes sensitive personal data, which is important to remember while designing applications. The list includes: financial data, health data, sex life, sexual orientation, biometric data, genetic data, transgender status, intersex status, caste/tribe, and religious belief.  


Note the various actors and their roles in this chapter. It has definitions for Data principal, Adjudicating officer, Consent Manager, Data Auditor, Data Fiduciary, Data Protection Officer, and Data Protection Authority of India.



Chapter 2: Obligation of Data Processor


The sections in this chapter state the methods by which the processor or fiduciary must get consent from the data principal before collecting their data. It mandates disclosure of the purpose, extent, nature, categories, and storage period of the data being collected. 

The highlight here is that it enables the data processor to share and transfer personal data as part of a business transaction, subject to the clauses below: 

  • Disclose with whom the data will be shared. 
  • Provide contact details of the data processor and data protection officer. 
  • The data principal (person) retains the right to withdraw consent. 



Chapter 3: Grounds for processing of personal data without consent


The state has allowed itself to collect and process personal data without consent for provisioning services, security, court orders, and treatment during medical emergencies. This is critical information for e-governance application development teams to optimize their data privacy design. 


One key highlight is that it allows storing personal data, as long as it is not sensitive, in the context of employment by the data processor. HR applications, which usually need to store employee data, could continue to do so without employee consent. 

The section also mentions other “reasonable purposes” that are exempt from consent: prevention or detection of fraud, security, credit scoring, M&A, search engines, and publicly available personal data.



Chapter 4: Personal data of children


With child rights protection as its objective, this chapter mandates a policy for parent/guardian consent. Profiling, tracking, behavioural monitoring, targeted advertising, or any other type of potential harm to the child through violation of informational privacy is disallowed. Registration with the Data Protection Authority is a must for data fiduciaries collecting children's data.  



Chapter 5: Rights of Data Principal


This chapter talks about the rights of the data principal over their data, mandating the processor to provide information in a clear and concise manner. 

It is important to understand how the data principal can exercise these rights. The data principal can:

  • Ask for the identity of the data processor and the categories of personal data 
  • Request the right to be forgotten
  • Nominate a legal heir
  • Request additions to the agreement terms
  • Exercise the right to correction and erasure 
  • Restrict or discontinue disclosure when the purpose is no longer served (20(1))

On the other hand, the act allows the data processor to provide justification in case a request cannot be considered or is not technically feasible (19(2)b). It also states that the data processor can charge a fee to the data principal for providing the information back to the requestor (21(2)).



Chapter 6: Transparency and accountability measures


This is an interesting chapter for the IT fraternity, with more IT-level details. It mandates that the processor prepare and publish a “privacy by design” policy containing:

  •  Business and technical systems design and process
  •  Obligations
  •  Approaches to transparency in data processing   

This section recommends that the processor have a defined strategy for:

  • Encryption and de-identification processes (a minimal sketch follows this list)
  • Protecting the integrity of personal data
  • Preventing misuse
  • A notification and alert mechanism for data breaches; notification must be issued within 72 hours of becoming aware of such a breach
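
As a purely illustrative example of the "encryption and de-identification" item above (not a statement of what the bill itself prescribes), the sketch below pseudonymises direct identifiers with a keyed hash before a record leaves the system of origin. The field names and the secret-key handling are assumptions made only for this illustration.

```python
"""Illustrative pseudonymisation of direct identifiers with a keyed hash (HMAC-SHA256).

Field names and key handling are assumptions for this sketch; they are not prescribed by the bill.
"""
import hashlib
import hmac
from typing import Any, Dict

SECRET_KEY = b"replace-with-a-key-from-your-secrets-manager"   # placeholder key
IDENTIFIER_FIELDS = {"name", "email", "phone"}                  # assumed direct identifiers

def pseudonymise(record: Dict[str, Any]) -> Dict[str, Any]:
    """Replace identifier fields with stable, keyed pseudonyms; keep other fields as-is."""
    out: Dict[str, Any] = {}
    for field, value in record.items():
        if field in IDENTIFIER_FIELDS:
            digest = hmac.new(SECRET_KEY, str(value).encode("utf-8"), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]   # same input always maps to the same pseudonym
        else:
            out[field] = value
    return out

if __name__ == "__main__":
    record = {"name": "A. Example", "email": "a@example.in", "phone": "9999999999", "city": "Pune"}
    print(pseudonymise(record))
```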

The chapter also mandates the appointment of a data protection officer and lists the responsibilities of that role. The bill expects continuously updated, detailed documentation of the privacy by design policy to be published on the processor's website. The documentation should contain:

  • Categories 
  • Purpose
  • Exceptional situations
  • Procedure for the exercise of rights by the principal, with contact details and an escalation process
  • Information on cross-border transfers

It also calls for a data protection impact assessment (27(1)), which should contain:

  • Detailed description of proposed processing operation
  • Assessment of the potential harm that may be caused to the data principal

As per the bill, this is validated by a data auditor, who assigns a rating in the form of a data trust score. 



Chapter 7: Restriction on transfer of personal Data outside India


Sensitive data may be transferred outside India, but such data must continue to be stored in India (33(1)). This has a huge impact on the IT side of the business, which has to ensure a data centre inside India is set up to store a copy of the data before it is transferred outside the country. 


Another highlight is that central government approval is required for sharing sensitive personal data with a foreign government or agency (34(1.3)).



Chapter 8: Exemptions


This chapter lists exemptions from the act when the Authority is satisfied that the application is for research, archiving, or statistical purposes (38). Allowing a sandbox environment for data processing in research and innovation is a highlight of this chapter. 


To help startups, exemptions are provided under clauses such as the small entity's turnover being low, processing being carried out for a very brief period (for example, just one day in a given year), and innovative solutions in AI, ML, or other emerging technologies. Allowing the sandbox exemption for innovation would immensely help research-oriented organizations.


The chapters below mainly provide details on the regulation and enforcement framework. 


Chapter 9: Data protection authority of India


This chapter mainly talks about the GoI's intention to set up the authority and provides details on its structure and duties. The framework setup information here is more relevant to public service authorities than to IT companies.



Chapter 10: Penalties and compensation


This is an important chapter for businesses to understand the seriousness of this bill. It lists the different types of penalties and fines for non-compliance with the law. 



Chapter 11: Appellate tribunal


This chapter incorporates instructions for the Government of India to establish a tribunal to hear cases and conflicts arising out of data protection issues. 



Chapter 12: Finance, Account and Audit


This chapter covers the government's fund allocation for the data protection authority and provides detailed instructions to public policy implementers within the government. 



Chapter 13: Offences


This chapter discusses the different types of offences and associated penalties under the data protection law, which include imprisonment and fines. It is of paramount importance for the legal departments of data processors to understand the context and spread awareness among responsible executives. 



Chapter 14: Miscellaneous


The last chapter covers miscellaneous activities of the authority and the procedures to be followed in various scenarios around the enactment of the data protection policy.



My View


This act is absolutely essential for protecting individual data privacy and supporting the growth of the digital economy. With the growth of digital products and services in the country, the importance of data protection has taken centre stage. I strongly believe that a well-implemented data protection act would enforce citizens' fundamental right to privacy and build user trust and confidence in the digital business carried out in this land. The bill has good intentions and objectives, addressing the most basic features like simple consent forms, data minimization, data correction, data porting, breach notifications, restrictions on automated decisions using personal data, and most importantly, citizen awareness.  


Some of the clauses in this bill have been opposed, and the committee is reviewing them. I'm hopeful that this law, once enacted, will reduce misuse of personal data, ensure compliance, and promote data privacy awareness in India.




References:


JPC Report:

http://164.100.47.193/lsscommittee/Joint%20Committee%20on%20the%20Personal%20Data%20Protection%20Bill,%202019/17_Joint_Committee_on_the_Personal_Data_Protection_Bill_2019_1.pdf