
What Compliance Automation Actually Looks Like Inside a Large Tech Company

9 min read

Compliance teams at large technology companies operate under a level of regulatory scrutiny that most organizations never encounter. FTC settlements, GDPR transfer requirements, CCPA obligations, SOC audits: each one generates its own documentation burden, and the teams responsible for meeting those obligations often do so through manual processes that consume hundreds of hours per audit cycle. Sumit Sharma has spent the last several years building automation systems to replace those manual workflows. His work has covered automated control monitoring and evidence generation, third-party risk assessment tooling used by tens of thousands of employees, and security awareness training platforms serving over 650,000 users globally. He has also contributed to the professional knowledge base through ISACA Journal publications, peer review work for the Cloud Security Alliance and IEEE, and speaking engagements on third-party risk management and AI governance. We spoke with him about what compliance automation looks like when it’s actually running at scale, where AI fits into risk assessment today, and what most vendors get wrong about how these programs operate inside large companies.

You reduced manual evidence preparation from over 100 hours to minutes through automation. What did that project actually involve day to day, and what broke along the way?

This project involved three components: continuous control monitors, a failure escalation mechanism, and automated evidence generation. The generated evidence was available within a system, organized by control, for auditors to consume directly. This reduced the manual overhead on business and technology teams, who previously had to provide evidence for specific samples during each audit cycle. With systems like these, issues can always arise. A couple of examples of what broke along the way: the monitoring logic was wrongly configured, or the wrong data source was selected, which led to inaccurate monitoring or evidence generation.
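The three components he describes (monitor, escalate, generate evidence) can be sketched as a single loop. This is a minimal illustration, not his actual system: the function names, the control ID, and the MFA check are all hypothetical.

```python
import json
from datetime import datetime, timezone

def run_control_monitor(control_id, check_fn, escalate_fn):
    """Run one control check, escalate on failure, and emit an
    evidence record that auditors can consume directly."""
    passed, details = check_fn()
    evidence = {
        "control_id": control_id,
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "result": "PASS" if passed else "FAIL",
        "details": details,
    }
    if not passed:
        escalate_fn(evidence)  # failure escalation mechanism
    return json.dumps(evidence)

# Hypothetical check: every admin account has MFA enabled.
def mfa_check():
    accounts = [{"user": "alice", "mfa": True}, {"user": "bob", "mfa": True}]
    missing = [a["user"] for a in accounts if not a["mfa"]]
    return (len(missing) == 0, {"accounts_missing_mfa": missing})

record = run_control_monitor("AC-07-MFA", mfa_check, print)
```

Note how the evidence record is produced as a side effect of the monitoring run itself; the configuration risks he mentions live in `check_fn` (the monitoring logic) and in whatever data source it queries.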

Compliance teams at large organizations tend to resist new tooling. When you rolled out a portal overhaul to 80,000 employees, how did you get people to use it?

The portal overhaul was primarily a user interface (UI) change. Before we made those changes, we had internal metrics to start from; for example, the customer satisfaction score of the tool was lower than the expected baseline. We had also been seeing a lot of internal user tickets complaining about UI issues, slowness, and difficulty moving between screens within the tool, which signaled that a UI change was needed to better guide users. So in a way we heard user feedback and acted on it. Before rolling out the new portal, we met with a few users and teams who used the tool more frequently than others. We also heard feedback from upstream and downstream system users, which gave us additional perspective, helped us focus our requirements in the right direction, and let us improve the key UI components. Regarding adoption, we started internal communications on what changes we planned to bring with the new UI and when, to avoid any surprises. We also invited some users to user acceptance testing to get their firsthand feedback. Upon rollout, we uploaded videos to the portal providing a walkthrough of all the new features.

Your ISACA writing covers AI and ML applications in third-party risk assessment. Which use cases are organizations actually deploying today versus talking about deploying?

I have seen organizations automating manual workflows, such as sending reminders, and also building risk assessment logic that rates a third party based on certain criteria. Additionally, I think certain organizations are trying to integrate risk reviews with other reviews within the third-party life cycle to create a more seamless process for internal and external users.
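Criteria-based third-party rating typically reduces to a weighted score mapped to a tier. The weights, criteria, and thresholds below are purely illustrative assumptions, not any organization's real rubric.

```python
# Illustrative weights for intake-questionnaire criteria; real
# programs define their own criteria and calibration.
RISK_WEIGHTS = {
    "handles_pii": 40,
    "prod_network_access": 30,
    "no_soc2_report": 20,
    "offshore_data_storage": 10,
}

def rate_third_party(answers):
    """Score a vendor from yes/no questionnaire answers and map
    the total to a risk tier."""
    score = sum(w for key, w in RISK_WEIGHTS.items() if answers.get(key))
    if score >= 60:
        tier = "high"
    elif score >= 30:
        tier = "medium"
    else:
        tier = "low"
    return score, tier

rate_third_party({"handles_pii": True, "no_soc2_report": True})
# → (60, 'high')
```

A rules-based scorer like this is what "AI/ML in third-party risk" often amounts to in practice today; ML enters later, for example to flag vendors whose questionnaire answers look inconsistent with observed behavior.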

You’ve peer reviewed AI governance work for the Cloud Security Alliance and IEEE. What patterns do you notice in how practitioners are thinking about AI controls right now?

What I have been noticing is that practitioners are taking a risk-centric view of AI, which means they are treating it as a new cybersecurity or compliance surface rather than just a new innovation. They are pushing for auditable controls mapped across the entire AI lifecycle, as opposed to high-level ethics statements. Also, there is strong demand for cross-framework alignment (NIST, ISO, EU AI Act) to reduce fragmentation. Overall, AI governance must be adopted for safer and faster adoption in the AI development process, where governance is not just a check-box exercise but something that can enable trust, innovation, and speed. This can also be a key differentiator for organizations that are either building or adopting AI.
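The cross-framework alignment he describes is often implemented as a mapping table: one internal control points at the places in each framework it helps satisfy, so evidence is collected once. A minimal sketch follows; the control ID and the clause references are placeholders, not authoritative citations of those frameworks.

```python
# One internal control mapped to several frameworks. Clause
# references below are illustrative placeholders only.
CONTROL_MAP = {
    "AI-RISK-01": {
        "description": "Document and periodically review model risk assessments",
        "frameworks": {
            "NIST AI RMF": "MAP function",
            "ISO/IEC 42001": "risk assessment clause",
            "EU AI Act": "risk management requirements",
        },
    },
}

def frameworks_satisfied(control_id):
    """List the frameworks a single internal control maps to, so
    one piece of evidence can serve several regimes at once."""
    return sorted(CONTROL_MAP[control_id]["frameworks"])

frameworks_satisfied("AI-RISK-01")
# → ['EU AI Act', 'ISO/IEC 42001', 'NIST AI RMF']
```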

I would answer this question a little differently. Every third party requires sign-off from legal, procurement, privacy, and security, and this is the right industry practice that regulators want to see. Your question seems to be more about how you run a project with so many stakeholders. When you work with multiple cross-functional stakeholders, a project’s problem statement and its impact play a key role. A nice-to-have project will not fly with that many stakeholders, so before starting or conceptualizing any project, you must clearly document the problem and the impact. Projects required to satisfy a regulatory requirement can be an easy sell, because no one wants the company to face fines or reputational damage due to noncompliance. Projects aimed at proactive risk mitigation, however, can get a lot of pushback; potential reasons include operational overhead on different functions and a lack of resources to manage it. To address these concerns, identify key metrics for this group of stakeholders so they can easily quantify the impact on their teams. This helps them prepare to manage those operational constraints and helps you align on the right timeline for the project go-live. That way you do not run a project that is about to fail, but a well-thought-out one where requirements are clearly captured; it takes time, but you deliver something highly impactful.

Agentic AI creates risks that existing IT control frameworks weren’t built for. What should organizations be documenting or measuring that most aren’t yet?

We are already seeing or reading about instances where AI agents can access sensitive data and coordinate with other agents. I feel these risks should be mapped to fundamental principles around internal control and governance. Traditional frameworks such as COSO emphasize segregation of duties, monitoring, and risk assessments to ensure reliable operations, but they do not address the novel risks introduced by agentic AI, such as over-privileged access, inter-agent collusion, and prompt-based manipulation. There is a need for a control framework that integrates classical IT general controls (ITGC) with emerging AI-specific considerations. Organizations must think about measuring the autonomy of these agents, including what they can access or invoke without human intervention. Model drift will require tracking, and organizations must log the steps, action chains, and feedback loops of agents. Also, as mentioned before, such frameworks must align with global regulatory requirements, which gives organizations an opportunity to rationalize their control environment instead of creating multiple similar controls to satisfy AI requirements for different country-level or regional regimes.
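Two of the measurements he names, logging agent action chains and quantifying what an agent does without human intervention, can be combined in one structure. This is a hypothetical sketch; the class, field names, and the 0.5 threshold implied by the example are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AgentActionLog:
    """Append-only log of an agent's actions, recording whether a
    human approved each step -- one way to measure autonomy."""
    agent_id: str
    actions: list = field(default_factory=list)

    def record(self, tool, resource, human_approved):
        """Append one step of the agent's action chain."""
        self.actions.append({"tool": tool, "resource": resource,
                             "human_approved": human_approved})

    def autonomy_ratio(self):
        """Fraction of actions taken without human intervention."""
        if not self.actions:
            return 0.0
        auto = sum(1 for a in self.actions if not a["human_approved"])
        return auto / len(self.actions)

log = AgentActionLog("procurement-agent-01")
log.record("email", "vendor_contact", human_approved=False)
log.record("db_query", "contracts_table", human_approved=True)
log.autonomy_ratio()
# → 0.5
```

A metric like this makes "how autonomous is this agent?" auditable: a rising ratio over time, or any unapproved access to a sensitive resource, becomes a trigger for the kind of escalation a classical ITGC monitoring control would expect.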

You’ve worked in consulting, banking, and Big Tech. Which environment taught you the most about managing technology risk, and why?

I believe there is no one environment that has taught me the most about managing technology risk. Consulting gave me broad exposure across industries, clients, and local and global regulations. Banking taught me why technology risk is so important to manage in a financial institution: one systemic issue can have global ramifications across the bank and lead to financial loss, directly impacting the bank’s revenue and, on top of that, its customers’ investments. Coming into tech with all this experience helped me understand how important risk management is to leadership and to the business. Unlike consulting or banking, big tech companies operate at massive scale and velocity, with global regulatory exposure. What I learned is that while basic risk management fundamentals still apply, they need to move beyond point-in-time assessments to continuous risk monitoring. The blast radius of a failure is immediate and user-impacting, requiring risk-based decision making. Risk is important, but it should not slow down the business. Also, since the risk dimension here centers on managing user data and its impact, some regulatory requirements from other industries, such as banking, may not apply. Hence risk management needs to be tightly coupled with product design, data architecture, and automation rather than being a mere policy. What I learned, and am still learning, in tech is balancing innovation speed with regulatory obligations, which has definitely sharpened my ability to design projects that scale and are preventive rather than reactive.

What do vendors selling AI compliance tools get wrong about how these programs actually run inside large companies?

I feel vendors selling AI compliance tools underestimate how fragmented and complex large companies are. There is an underlying assumption that there is one centralized governance body, when in practice this responsibility is split across multiple compliance teams with overlapping authority. Tools are built as dashboards and at times ignore that the actual compliance processes are executed across multiple systems and development pipelines. Also, if the tools do not help automate manual workflows such as evidence generation, it is difficult for them to scale. Another mistake is treating compliance as a static checklist rather than a continuous process that incorporates regulatory updates and model changes. Tools are often built as regulator-ready reporting without thinking about usability for the engineers and program managers who will actually work in the tool day in and day out. Also, bigger organizations care less about flashy risk scores and more about traceability, auditability, and accountability when a regulator asks, “who approved it and why?”
