Latest News & Insights

Athentic Consulting’s team of experienced experts brings you the
latest news and insights in law and regulation.

AI Transformation Lessons

In late October, I had the opportunity to attend the "Harnessing AI for Breakthrough Innovation and Strategic Impact" course at Stanford University. This course was developed through the collaboration of several faculties within the university, including the Stanford Institute for Human-Centered Artificial Intelligence (HAI). In this article, I would like to summarize the key points from the training to share with you as follows:

AI involves many disciplines, not just technology

Although Stanford is best known for innovation and engineering, this course brought together faculty and experts from many fields to design an approach to AI that benefits both the business sector and society. Nearly every professor and expert emphasized that driving AI cannot rely solely on technological advances. It requires cooperation from all parties: technology experts, data experts, legal experts, process experts, and every employee in the organization. Driving AI therefore also means driving organizational culture, which makes Change Management—of both work processes and people—something that must happen in parallel.

AI Transformation: Problem first, Data later, then AI solution

Step 1: Know what you want AI to do (Objective)
Focus on the problem. Start by looking at the problem itself, not at technology or solutions, and make the goal clear. For example, if we use machine learning to make predictions, we must determine what the AI should predict that matters to our business. Then we move to prioritization: what should AI predict first, and what can come later?

Step 2: Know what Data to use to teach AI
Our own data is the most valuable data, and we must be able to convert it into machine-learnable form. The next question is: what kind of data should we rush to collect?

  • Select data or datasets sufficient to answer your questions or affect your goals. Collect relevant data: the quantity does not need to be massive, only enough to support forecasting or prediction. We cannot, and should not, collect every piece of data in the organization for AI, because doing so is very wasteful.
  • The higher the quality of the data, the better. The data must reliably represent reality; if data quality is poor, even the best AI model will make prediction errors. To know whether the data is of sufficient quality, we must also decide how to measure it (e.g., scoring customer satisfaction from 1-5 versus a simple "satisfied/dissatisfied"), because different measurement methods may yield different decision results.
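To illustrate how the measurement method can change the conclusion, here is a minimal sketch with invented survey data, encoding the same responses two ways:

```python
# Hypothetical customer-satisfaction responses on a 1-5 scale (invented data).
scores = [5, 4, 3, 3, 2, 4, 3, 1, 3, 4]

# Fine-grained measurement: the average score looks slightly above neutral.
average = sum(scores) / len(scores)                          # 3.2

# Coarse measurement: collapse to satisfied (>= 4) vs. dissatisfied (< 4).
satisfied_rate = sum(s >= 4 for s in scores) / len(scores)   # 0.4

# The same underlying reality supports two different headlines:
# "customers are roughly neutral" vs. "60% of customers are dissatisfied".
```

The numbers are fabricated, but the pattern is real: the granularity you choose when collecting data constrains the decisions you can later make from it.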

Data in your organization is your most valuable asset. Start maintaining and managing it today, even if you are not sure what kind of AI use case you will do, how you will do it, or how much data you will use. Having a large amount of quality data will create an advantage in AI transformation. And if we understand our organization's data well, the various AI agents created will generate value for us.

Step 3: Interpret the results correctly (Judge)
In the AI process, another important task is analyzing and interpreting what the AI predicts. If you are among the many people in an organization who are not "AI people," this is your strength in the AI transformation process: analyzing and interpreting forecast results requires business understanding, which is what turns predictions into business decisions.


Driving AI Strategically

In this training, the course invited several professors from the Business School to provide perspectives on driving AI, which again reflects that succeeding with AI involves more than just technology. Prof. Burgelman stated that strategy is a state of mind, a concept we hold in order to control or determine our own goals. For an organization to be stable and sustainable, it must have Strategic Leadership, which consists of two main parts: first, the thinking that drives action to succeed in cooperation and competition; second, the organization's ability to control its own purpose.

A good strategy shouldn't just be about success; people in the organization must participate and be happy too

Having a leadership strategy that focuses only on success is not enough to make a company or society as a whole sustainable, because success is not always happiness. Fundamentally, humans desire happiness on an individual, organizational, or societal level. From this point, organizational leaders or national leaders need to give importance to governance in driving rapidly changing technologies like AI.

Culture eats strategy for breakfast, and eats Technology for lunch and dinner

No matter how good a strategy is, if the practical guidelines or organizational culture do not support it, that strategy is unlikely to succeed. As the saying goes, "culture can eat strategy for breakfast and can eat technology for lunch and dinner." Ultimately, organizational culture is the organization's way of practice: how it treats its people and how its people treat each other. It also covers work processes and how people operate. When new technology comes in, how will those processes be adjusted? How can they become less siloed? There is probably no organization that can introduce new technology and have everyone accept it immediately.

A strategy, however, helps people in the organization do things they would not otherwise want to do, with alertness and excitement: excitement not only about new ways of working, but also about the benefits to the organization and its people when the strategy is followed. For this reason, we must create fair indicators and evaluations that ensure the people willing to put in the effort to adjust work processes truly benefit, whether through better compensation or through more efficient, less tiring work. The three main points of organizational strategy to consider when doing AI Transformation are:

  • Organizational Readiness: Start with whether the problems we have in business or policy can really be solved with AI. How much data readiness is there? Setting the problem for AI to solve is crucial because if the organization is not ready enough, it may lead to setting a problem that is too difficult, which even AI cannot solve. Or, if the problem is too easy, it might not require AI to solve; using other tools might save more money and time.
  • Strategic Perspective: You must plan for AI to truly solve problems or truly add value to the organization (Perceived value: PV) compared with the total cost of delivering it (Delivered cost). AI transformation is not a business strategy or business plan in itself, only part of one. We still need to determine how the AI we use solves problems for the organization or creates differentiation. Is it enough to catch up with or lead competitors? For example, with the machine learning we plan to use: applied to our organizational data, can it predict accurately enough to help us understand our customers better, or does it just produce a prediction?
  • Are the skills of people in the organization sufficient? Since driving AI requires diverse skills, a clear human-resource strategy is necessary. AI work involves not only technologists but also other personnel who understand the business and can explain issues or problems to technologists, clarifying the problem so AI can be brought in to solve it. Even among technologists, AI work requires various sub-fields: Data science (algorithms), Data engineering (data cleansing), Software engineering (converting algorithms into code), Infrastructure engineering (supercomputers), and User interface design (dashboards). You may not need to hire people in all these fields in-house and can outsource instead, but that still comes at a significant cost.

In the past, the mindset framework related to organizational strategy was Wisdom > Knowledge > Information > Data. But the arrival of AI has changed this perspective to Big data/information x Computation = Power. What we need to realize is that we still need Wisdom to use that Power.

Don't miss the boat, but you don't need to follow the latest fashion. Focus on value and Return on Investment.

Measuring the return on AI (AI ROI) can be divided into steps as follows:

Step 1: Define goals clearly
How important is this to your business? Is it a "Must-have" (keeping the business competitive) or does it create an advantage (using data or company strengths to outperform competitors)? Is it a new business model? Specify the goal clearly and how to measure it.

Step 2: Calculate all costs
Account for the cost of the AI model, labor costs and labor savings, and IT costs. Look for opportunities to share costs, and don't waste money on small, one-off cases. Identify the benefits, such as increased revenue or improved productivity, and clearly specify which tasks shift from human to computer and how that affects expenses.

Step 3: Hidden costs
Because there are few clear precedents yet, cost estimates for data, system integration, and security are often too low. Add to these the costs of resistance from organizational culture, fear of job losses, and retraining. Reduce risk by planning to stop or change course on projects quickly.

Step 4: Overlooked benefits
Learning! This project might make other future projects easier or cheaper.
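The four steps above can be sketched as a back-of-the-envelope ROI calculation. All figures below are invented for illustration; the point is that hidden costs (integration, retraining) belong in the denominator alongside the obvious ones, and overlooked benefits such as learning belong in the numerator:

```python
# Hypothetical annual figures in arbitrary currency units (all assumptions).
benefits = {
    "increased_revenue": 500_000,
    "productivity_gains": 200_000,
    "learning_spillover": 50_000,   # Step 4: often-overlooked benefit
}
costs = {
    "ai_model": 150_000,
    "labor": 100_000,
    "it": 80_000,
    "integration": 60_000,          # Step 3: commonly underestimated
    "retraining": 40_000,           # Step 3: commonly underestimated
}

total_benefit = sum(benefits.values())   # 750,000
total_cost = sum(costs.values())         # 430,000
roi = (total_benefit - total_cost) / total_cost
print(f"ROI: {roi:.0%}")
```

Dropping the two "hidden" cost lines would nearly double the computed ROI, which is exactly how AI projects end up looking better on paper than in practice.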

If you don't have massive investment capital, try to avoid building foundation models yourself.

You can build substantially on existing foundation models, because building one from scratch requires massive amounts of money and resources. For example, GPT-3 has 96 layers and 175 billion parameters and was trained on 570 GB of data. GPT-4 was developed with multimodal techniques, but its details were not disclosed (OpenAI cites AI safety and competitive advantage). Even among tech giants, competition in developing Large Language Models is fierce. There are already many LLMs, and more will come. The issue is that these companies are racing with similar training data, so their models give similar answers and face the same types of problems. This trend may continue until a major shift arrives from a new type of model that solves problems in a different way.

Machine learning is a class of algorithms that learn and adapt from data without being explicitly programmed.

The highlight of machine learning is prediction by taking existing data and converting it into things we didn't know before. In this process, humans must make decisions based on the forecast. The AI-driven approach needs to view management challenges as prediction problems.
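As a minimal, standard-library-only sketch of "prediction without explicit programming": instead of hand-coding a rule, we fit a line to invented historical data by least squares and use it to forecast an unseen point. The data and the linear form are assumptions chosen for clarity, not anything from the course:

```python
# Invented history: months 1-5 and their sales figures (deliberately linear).
xs = [1, 2, 3, 4, 5]
ys = [10, 12, 14, 16, 18]

# "Learn" slope a and intercept b from the data via least squares,
# rather than programming the rule y = 2x + 8 explicitly.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    return a * x + b

print(predict(6))   # forecast for month 6
```

The model converts existing data into something we did not know before (next month's estimate); a human still has to decide what to do with that forecast.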

AI models are never perfect; there is always room for improvement. Models may hallucinate or show signs of sycophancy (telling the user what they want to hear). In the future, models will be improved to better meet objectives, but the larger the model, the more data and energy it requires.

Driving AI with Responsibility (Responsible AI)

AI is like other technologies that have both pros and cons, depending on how we choose to use it. But for technology use to truly benefit the majority, we need to establish rules, regulations, and etiquette for developing and using AI to ensure it is Robust, Fair, Simple, Transparent, and Responsible.

AI Regulation

Regulating AI technology has key considerations: How to regulate, What should be regulated, When to start regulating, and Who is responsible for regulation.

  • How to regulate: Regulate with strict rules (e.g., bans, design standards, fines) or soft rules (e.g., data disclosure, registration, impact assessment).
  • What should be regulated: Regulate only AI using Machine Learning, or regulate AI using general logic, or even Excel usage? Regulate by computing level? Regulate only "High Risk" systems or only certain sectors (e.g., public health)?
  • When to start regulating: Regulate from the system design and data management stage (upstream) or regulate when humans are involved in decision-making (downstream).
  • Who is responsible for regulation: Government agencies like the Attorney General, specialized organizations ("FDA for AI"), allow private lawsuits, or have auditors be responsible?

Examples of laws related to AI

  • EU AI Act: Regulates all types of AI systems, with two main parts: enforcement and risk categorization. It divides AI systems into four risk levels, with control measures whose strictness varies by level. I have written about the details in this column of Bangkok Biz News before, so I won't repeat them here.
  • US Artificial Intelligence Accountability Act: This draft law has not been enforced yet; it is being proposed in Congress. This draft law requires the US National Telecommunications and Information Administration (NTIA) to study and propose guidelines for AI regulation, such as audits, assessments, and certifications, to ensure AI systems are reliable and reduce risks (e.g., cybersecurity), including studying ways to provide AI-related information to the public, businesses, and agencies.
  • Transparency in Frontier Artificial Intelligence Act (California): Guidelines for regulating AI in California, passed as state law in September 2025, covering the following main issues:
    • Transparency: Requires developers of large-scale AI to publish safety standard frameworks and best practices on their websites so the public is aware of safety control processes.
    • Incident Reporting: Requires reporting significant safety incidents to the California Office of Emergency Services.
    • Whistleblower Protection: Prohibits companies from punishing or retaliating against employees who disclose information about dangers to public health or safety from AI use.
    • CalCompute: Retains the requirement to establish a consortium to develop public cloud-computing infrastructure.
    • Law Enforcement: This law will be enforced through civil penalties by the California Attorney General, rather than opening the way for direct public lawsuits.

Future Challenges: In the future, each industry may need to issue its own specific AI care standards because AI will be used more specifically (from previously general use). Specific regulation for each field will become more important.

Actually, there are still many good concepts and insights from having the opportunity to attend this course. I will gradually write more for you to read. In this article, I would like to conclude by summarizing: if we want to bring AI into our organization, what steps should we take?

  1. Start with the problem: Don't start with technology. Find the point where there is a real problem first, then see if we have enough data or are ready.
  2. Do a Proof of Concept (POC) or sandbox: Experiment in a small scope first.
  3. Brainstorming for AI use: Use a bottom-up approach to make the POC more relevant, and use a top-down approach for Value Creation.
  4. In each use case:
    • Identify workflow steps to see the overview of the entire process.
    • Check if each workflow step links with other departments.
    • Identify workflow steps that AI can help with.
    • Check if other departments will benefit from bringing in AI to avoid AI Silos and redundant investment.
    • Identify risks and factors that may cause bias, including risk management.
Dr. Kampon Adireksombat
CEO & Chief Data Strategy and Transformation Officer