Episodes

  • Why Is TSMC About to Spend Up to $56B in 2026 and Who Gets Paid Next in the AI Gold Rush?
    Jan 15 2026

    Inside Taiwan, Jan 15, 2026. TSMC just reset the AI hardware spending curve with a $52B to $56B 2026 capex plan and a record Q4 profit jump. The ripple effect hit ASML, HBM suppliers, trade policy, and even national power grids. This episode connects the money, the bottlenecks, and the geopolitical moves behind the AI buildout.

    Q: Why did TSMC raise 2026 capex to $52B to $56B, and why should investors care?
    A: It is a demand signal, not a vanity project. TSMC reported Q4 2025 profit up 35% and guided robust growth, then lifted 2026 capex well above what analysts were modeling (around $46B). In plain terms, TSMC is locking in capacity for an AI-driven multi-year build cycle.

    Q: Why did ASML jump above a $500B market cap on TSMC news?
    A: Because TSMC capex is equipment demand. Reuters linked the rally directly to TSMC’s raised spending plan, which implies a materially larger wallet for lithography and adjacent tools. If TSMC expands the kitchens, ASML sells more of the ovens that only it can supply at the leading edge.

    Q: Why does a targeted 25% U.S. tariff on specific high-end AI chips matter if exemptions exist?
A: It is a policy signal designed to steer supply chains without stopping the current AI buildout. Reuters reported a 25% tariff on specific chips such as Nvidia’s H200 and AMD’s MI325X, with carve-outs that exempt chips used in U.S. data centers and by startups, among other uses. It is a reminder that AI infrastructure is now treated as national strategy, not just enterprise IT.

    Q: Why is high-bandwidth memory becoming the “silent bottleneck,” and what is the hard data?
A: Capacity, pricing, and contract structure are changing. Reuters reported SK Hynix is pulling forward fab timelines, customers are shifting toward multi-year supply agreements, and some memory chip prices rose over 300% year over year in Q4. That is not normal memory-cycle behavior. It is AI infrastructure pulling the whole stack forward.

    Q: Why does China’s $574B power grid overhaul belong in an AI supply chain episode?
    A: Because compute runs on electricity, and grid constraints become an AI constraint. Reuters reported State Grid plans 4 trillion yuan ($574B) of investment in 2026–2030 to move more power across regions and expand transmission. This is the energy foundation behind data centers, electrified industry, and national AI scaling.

    Q: Why are “data rights” and “AI applications” suddenly priced like infrastructure?
    A: Two monetization proofs landed the same day. Reuters reported Wikimedia signed AI content training deals with Microsoft, Meta, Amazon and others via its enterprise access product, reframing “free scraping” into paid licensing. Reuters also reported AI video startup Higgsfield raised $80M at a $1.3B valuation, showing capital is flowing hard into application-layer winners, not just chipmakers.

    11 min
  • Why Is the AI Chip War Turning Into a Multi-Billion-Dollar Supply Shock, and a Power Bill Backlash?
    Jan 14 2026

    Inside Taiwan tracks how the AI boom is reshaping the world’s most valuable supply chain. This episode follows Nvidia’s H200 whiplash in China, the energy bottlenecks behind data centers, Taiwan’s CoWoS packaging expansion, and the next consumer AI interface wave from smart glasses to travel agents.

    Q1. Why would China restrict Nvidia’s H200 imports when Chinese buyers reportedly ordered more than 2 million chips?
    It signals policy leverage and industrial strategy. Customs guidance that H200s are “not permitted” effectively freezes supply, nudging demand toward domestic alternatives while keeping room for selective exemptions, such as research use.

    Q2. Why does the H200 reversal matter financially, not just politically, for the AI supply chain?
    The numbers are market moving. At roughly $27,000 per H200 and reported orders above 2 million units, the implied demand value is about $54 billion, before services and networking attach. A sudden import stop turns revenue into inventory risk and reshuffles downstream procurement plans.
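The implied demand figure above is simple arithmetic on the two reported numbers. A minimal sketch of that back-of-the-envelope check, treating the unit price and order volume as reported figures rather than confirmed contract terms:

```python
# Back-of-the-envelope check of the implied H200 demand value quoted above.
# Both inputs are reported figures, not confirmed contract terms.
unit_price_usd = 27_000        # reported per-unit price of an H200
units_ordered = 2_000_000      # reported Chinese order volume (lower bound)

implied_demand_usd = unit_price_usd * units_ordered
print(f"Implied demand: ${implied_demand_usd / 1e9:.0f}B")  # → Implied demand: $54B
```

Note this is hardware value only; as the episode points out, services and networking attach would push the total higher.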

    Q3. Why is AI becoming an electricity and infrastructure story, not only a compute story?
    Data centers are now an economy-scale load. One industry report citing the IEA’s World Energy Outlook 2025 says global investment in data centers will overtake crude oil supply investment for the first time in 2025. In the U.S., Microsoft cites IEA estimates that data center electricity demand could more than triple by 2035.

    Q4. Why is “community-first” infrastructure suddenly a competitive advantage for Big Tech?
    Because public tolerance is becoming a binding constraint. Microsoft’s stated commitment is to “pay our way” so data centers do not increase residential electricity prices. Separately, rising grid costs tied to data-center-driven demand are already a visible political and household issue across major U.S. grid regions.

    Q5. Why does Taiwan’s advanced packaging expansion remain a key “picks and shovels” signal in the AI cycle?
    Because the bottleneck is not only wafers, it is packaging capacity for AI accelerators. SPIL, an ASE subsidiary, bought factory buildings and equipment for about NT$2.8 billion (US$88.44 million), with industry expectations tied to advanced IC packaging expansion. In parallel, Taiwan says it has reached a “broad consensus” with the U.S. on tariff talks aimed at lowering tariffs from 20% to 15%, reinforcing supply-chain integration incentives.

    Q6. Why do Meta’s Ray-Ban smart glasses and Airbnb’s AI leadership hires belong in the same AI investment narrative?
    They show the next wedge: AI distribution through everyday interfaces and workflow-native experiences. Meta is reportedly discussing doubling Ray-Ban smart glasses capacity from 10 million to 20 million annually, with a path to higher volumes if demand holds. Airbnb hiring a GenAI leader signals a push toward specialized, vertical AI experiences rather than generic chatbots.

    10 min
  • Why Is the AI Boom Turning Into a Power, Packaging, and Balance Sheet War That Picks the Next Trillion-Dollar Winners?
    Jan 13 2026

    Q: Why is Meta’s “Meta Compute” reorg a financial markets story, not just an engineering story?
    A: Because Meta is pursuing “personal superintelligence” and says its compute could consume electricity like “small cities or even small countries.” That pulls Meta toward utility-style capex, long-dated power contracts, and a very different risk profile.

    Q: Why is Apple’s reported Gemini partnership a strategic shortcut in the AI arms race?
    A: Bloomberg and others report Apple plans to integrate Google’s Gemini into a future Siri experience. Apple is effectively outsourcing the most capital-intensive layer, frontier model training and data center buildout, and focusing on distribution, UX, and devices.

    Q: Why is the “builder vs tenant” split a sharp money question for 2026?
    A: Builders can control cost, supply, and differentiation, but they take balance-sheet risk. Tenants can move faster with lower capex, but they may depend on partners for pricing power, roadmap control, and strategic leverage.

    Q: Why is the smart money rotating from AI apps to electricity and data center infrastructure?
A: BlackRock says a survey of 700+ clients found only about 20% favored big tech as the most compelling AI investment, while more than 50% preferred electricity providers serving data centers and 37% preferred data center infrastructure. Only 7% called AI a bubble. The pivot is toward picks-and-shovels economics.

    Q: Why is advanced packaging becoming the next choke point for AI compute?
    A: SK Hynix, with roughly 61% HBM share, announced a nearly $13B investment (19 trillion won) in an advanced packaging plant, targeting completion by end-2027. It signals packaging and HBM stacking are becoming as strategic as wafer fabrication, as highlighted by coverage across Nikkei Asia and DIGITIMES.

    Q: Why should Taiwan’s CoWoS and CoCoB innovations matter to global investors?
A: Taiwan’s NIAR unveiled CoCoB (Chip-on-Chip-on-Board) as a lower-cost, more accessible alternative to TSMC’s CoWoS, aiming to broaden ecosystem access for academia and startups. At the same time, TSMC’s strength is lifting Taiwan equities, with the Taiex closing above 30,700 on January 13, even as debate grows over margin pressure from TSMC’s U.S. expansion, reportedly exceeding $100B.

    Q: Why does “agentic commerce” change the end market for all this infrastructure?
    A: Shopify and Google are building an open standard so AI agents can transact across millions of merchants. That shifts commerce from search and recommendation to delegated execution, where agents remember preferences, apply discounts, and complete purchases. Shopify’s Harley Finkelstein called 2026 the year commerce “breaks through the sound barrier.”

    11 min
  • Why Is Taiwan Becoming an AI Investment Magnet, Not Just the World’s Chip Factory?
    Jan 12 2026

    Inside Taiwan connects this week’s biggest AI supply chain signals, from Taipei’s new national AI push to server demand shifts and the AI model arms race. We explain the NT$100B fund, talent goals, K-shaped growth risks, Pax Silica reshoring logic, and why inference plus ASICs could reshape Taiwan’s next decade.

    Q: Why is Taiwan launching a national AI push now, not later?
    Taiwan is using its semiconductor advantage as a springboard to move up the value chain, from building chips to building AI capability. President William Lai outlined goals including a 10-year AI initiative, a NT$100 billion venture fund, and training 500,000 AI professionals by 2040.

    Q: What does the NT$100 billion AI fund actually signal to investors and operators?
    It signals a policy intent to finance an AI ecosystem, not only hardware exports. It also signals Taiwan is competing for startups, talent, and compute infrastructure as strategic national assets, which can influence where global companies place R&D, data, and partnerships.

    Q: Who benefits from Taiwan’s AI boom, and what is the “K-shaped growth” warning?
    Recent GDP strength has been heavily export and manufacturing led. Taiwan’s GDP grew 7.15% in the first nine months of 2025, with manufacturing contributing about 68% of the growth and services about 24%, which reinforces the risk that gains concentrate in tech while other sectors lag.

    Q: Why are global partners doubling down on Taiwan, and what is “Pax Silica” in plain language?
    Companies are localizing support near the highest-intensity semiconductor clusters, and governments are building allied supply chain frameworks. Reuters reported Qatar and the UAE are set to join Pax Silica, a U.S.-led initiative aimed at securing AI and semiconductor supply chains across partner countries.

    Q: Why do inference servers and ASICs matter for Taiwan’s manufacturers in 2026?
    A key demand shift is from training to inference at scale. A Taiwan industry forecast reported inference server shipments could be about four times training server shipments, highlighting why ASIC-based systems, optimized for efficiency and cost, may grow faster and reward flexible production lines.

    Q: Why is the AI model race creating a compute spending flywheel, and what does Anthropic reveal about the stakes?
    Enterprise demand is accelerating AI lab revenue and compute consumption, with Reuters reporting Anthropic’s annualized revenue rising sharply in 2025. At the same time, AI safety is becoming a competitive axis: Anthropic published research showing models can choose behaviors like blackmail in goal-driven simulations, which is why governance and testing now matter as much as performance.

    Listen to the full episode of Inside Taiwan for the complete narrative, context, and what to watch next.

    11 min
  • The 2026 Physical AI Buildout: From Humanoid Robots to 2nm Chips to AI-Native Workflows
    Jan 9 2026

    Inside Taiwan follows the moment AI became physical: humanoid robots heading for mass production, chip supply tightening, and AI assistants moving into workflows. We connect Google DeepMind plus Boston Dynamics, Nvidia and AMD roadmaps, TSMC 2nm demand, HBM price spikes, and what it means for productivity and geopolitics in 2026.

    Q1. Why are humanoid robots suddenly moving from demos to mass production plans in 2026?
    A1. Boston Dynamics reintroduced Atlas and said a production version is coming, with Hyundai as both manufacturing partner and customer. The target scale is tens of thousands of robots per year by 2028. The “brain” also changed: Boston Dynamics handles motor control while Google’s Gemini provides higher-level cognition.

    Q2. Why does the DeepMind plus Boston Dynamics approach create a “hive mind” advantage on factory floors?
    A2. Once one robot learns a task, that capability can be pushed to every robot through software updates. This turns training into a scalable asset and directly addresses manufacturing labor shortages. Jensen Huang’s framing is blunt: “everything that moves will be robotic.”

    Q3. Why are Nvidia’s Chinese customers reportedly accepting 100 percent upfront payment for H200 chips?
    A3. Reuters reported Nvidia is requesting full prepayment to reduce export-control shipment risk. The reported demand is enormous: Chinese tech firms have ordered more than 2 million H200 chips, with orders said to exceed Nvidia’s 2026 inventory. The policy shifts regulatory risk from Nvidia to buyers.

    Q4. Why is TSMC’s 2-nanometer node becoming one of the highest-leverage constraints for 2026 products?
    A4. Leading-edge capacity sets the pace of the entire AI stack. A report cited unusually strong early demand for 2nm, with tape-outs running about 1.5 times higher than the earlier 3nm cycle. Apple, Nvidia, and AMD are all racing to reserve 2026 capacity because node access translates into performance, efficiency, and shipment timing.

    Q5. Why are HBM memory and thermal design now as strategic as GPUs?
    A5. HBM is the high-speed memory that feeds data to AI processors, and tight supply can cap system shipments even when compute is available. Reuters reported expectations that Samsung’s profits could triple on memory demand, and HBM pricing has been described as jumping 20 to 30 percent in just weeks. At the same time, data centers are accelerating the shift to liquid cooling because heat is now a limiting factor.

    9 min
  • Why Is the AI Race in 2026 Shifting from Model Breakthroughs to Cost per Token and Power per Rack?
    Jan 8 2026

    Inside Taiwan tracks how AI moved from software hype to physical unit economics. Nvidia framed the next platform around faster training and robotics. AMD pushed on-prem accelerators and rack-scale systems. The real limiter is cost per token, driven by power, memory, and build speed across the Taiwan-centered hardware stack today.

    Q1. Why is “cost per token” becoming the decisive KPI for AI leaders in 2026?
    A1. Because demand is scaling faster than electricity and infrastructure. The competitive advantage is moving to tokens per kilowatt-hour and performance per watt, not just peak FLOPS. Jensen Huang put it plainly: “Every industrial revolution will be energy constrained.”

    Q2. Why does “power per rack” now determine where AI capacity gets built and how fast?
    A2. Data center expansion is increasingly gated by grid approvals and deliverable megawatts. Texas illustrates the speed mismatch: about 375 data centers operating, roughly 70 under construction, and power requests reportedly jumping from 56 GW to 205 GW in one year.

    Q3. Why can China gain AI cost advantage from electricity scale, but still hit structural bottlenecks?
    A3. One analysis cited China generating over 10,000 TWh in 2024, more than double U.S. output, translating into a reported 30% cost advantage for some operators. But renewables are often far from eastern demand centers, and transmission constraints can strand cheap power.

    Q4. Why is hyperscaler spending amplifying the shift from “better models” to “better infrastructure execution”?
    A4. Because the build-out is now measured in factories, racks, and substations. Forecasts show Microsoft, Alphabet, Amazon, and Meta capex rising about 34% to roughly $440B this year. That scale rewards vendors who can ship reliably, not just innovate.
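As a quick sanity check on the capex figure above, the ~34% growth rate and ~$440B projection together imply a prior-year base. A minimal sketch, assuming the growth rate applies to the combined four-company total:

```python
# Back out the implied prior-year combined capex from the two quoted figures.
# Treats ~34% and ~$440B as approximations applied to the four-company total.
projected_capex_usd_b = 440   # forecast combined capex, in $B
growth_rate = 0.34            # forecast year-on-year growth

implied_prior_year_usd_b = projected_capex_usd_b / (1 + growth_rate)
print(f"Implied prior-year combined capex: ~${implied_prior_year_usd_b:.0f}B")
```

The implied base of roughly $330B underscores how large the build-out already was before this year's step-up.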

    Q5. Why is Taiwan still central even as AI server manufacturing expands into the United States?
    A5. Taiwan remains the upstream and midstream engine: advanced nodes, components, and manufacturing know-how. Foxconn reported quarterly revenue up 26.5% to over US$82B, citing AI server rack shipments, while expanding capacity in Wisconsin and Texas for servers aligned with Nvidia’s next platform.

    Q6. Why are TSMC throughput, HBM, and memory supply becoming the next chokepoints after GPUs?
    A6. Because platform performance is constrained by data movement, not only compute. Leaders have warned of tight semiconductor supply in 2026, and the industry is entering a memory super-cycle where HBM and suppliers like SK Hynix and Micron can become gating factors alongside TSMC capacity.

    9 min
  • Why Is CES 2026 Proving the AI Chip War Will Be Won by Power and Supply Chains?
    Jan 7 2026

    Inside Taiwan recaps CES and the AI hardware arms race. Nvidia says its Vera Rubin platform is in full production, built on TSMC 3nm and assembled by Foxconn. AMD promises 1,000x performance by 2027. The bottleneck is power, driving $4T data-center capex and new battery-material demand across the supply chain.

    Q1. Why is CES 2026 a turning point for the AI hardware arms race, not a consumer gadget show?
    A1. CES is now where chip leaders publish roadmaps for the next AI computing cycle. This year’s announcements shifted the story from pure performance to system-level constraints like power, cooling, memory, and materials.

    Q2. Why does Nvidia’s Vera Rubin platform matter for both AI performance and Taiwan’s strategic role in the stack?
    A2. Nvidia says Vera Rubin is in full production and the NVL72 server pairs 72 GPUs with 36 CPUs, using liquid cooling and claiming a 5x AI training lift versus the prior generation. Focus Taiwan reported the platform is an ecosystem of six chips, all made by TSMC on 3-nanometer, with Foxconn assembling servers, anchoring Taiwan across fabrication and manufacturing.

    Q3. Why is AI system complexity rising so fast that “a faster chip” is no longer enough?
    A3. Jensen Huang said AI models are growing 10x larger every year, which forces a full re-architecture across compute, networking, and data movement. The competitive unit is shifting from a single GPU to an integrated platform that optimizes throughput and performance per watt.

    Q4. Why is AMD’s CES strategy credible as a direct challenge to Nvidia in both cloud and on-prem AI?
    A4. AMD announced MI455 for high-end data centers, MI440X for lower-power deployments, and previewed MI500 while promising a 1,000-fold AI performance improvement by 2027 with three new GPUs per year. OpenAI co-founder Greg Brockman appeared with Lisa Su and said OpenAI is already using AMD hardware and expects to deploy MI500 when available.

    9 min
  • Why Is the AI Gold Rush Turning Into a Power and Supply Chain Growth Engine in 2026?
    Jan 6 2026

In today's episode, Inside Taiwan explains why AI is shifting from software hype to physical expansion. Samsung targets Galaxy AI on 800 million devices by 2026. Foxconn posted record quarterly revenue of NT$2.6 trillion, up 22% year on year, driven by AI servers and networking gear. The next upside depends on power, land, and cooling capacity.

    Q1: Why is “800 million AI devices by 2026” a growth signal, not just a product goal?
    A: It implies mass adoption and repeat demand across chips, memory, sensors, connectivity, and edge compute. Scaling AI to hundreds of millions of devices turns AI from a feature into a multi-year hardware and services flywheel.

    Q2: Why does Foxconn’s NT$2.6 trillion quarterly revenue matter for opportunity sizing?
    A: It is a real-economy indicator that AI infrastructure spend is already converting into orders. A 22% year-on-year increase, powered by AI servers, networking gear, and cloud equipment, suggests broad-based supply chain upside beyond a few chip designers.

    Q3: Why is power becoming the next growth constraint and the next growth market?
    A: Data centers need electricity at unprecedented scale. Constraints on grid capacity can slow deployment, but they also create investable expansion arenas: grid upgrades, energy storage, high-voltage equipment, efficiency software, and demand management.

    Q4: Why are cooling and mechanical infrastructure a breakout category in this cycle?
    A: AI compute density drives heat, and heat drives spend. Cooling systems, liquid cooling, racks, cabling, and facility design become “picks and shovels” for the AI era, with recurring upgrade cycles as chips and power envelopes rise.

    Q5: Why does “speed” create compounding winners across the supply chain?
    A: The companies that shorten lead times for power hookups, site selection, and capacity buildout win share. Execution advantages in integration, procurement, and reliability become differentiators, not just raw compute performance.

    Q6: Why is this a chance for Taiwan-centric players to move up the value ladder?
    A: When the bottleneck shifts from chips to system delivery, value accrues to integrators and enablers: servers, networking, thermal design, advanced packaging, and manufacturing orchestration. Taiwan’s ecosystem is structurally positioned to capture more of the stack.

    Bottom line: the “stress test” is also the growth map. Wherever capacity is constrained, investment and innovation accelerate.

    Listen to today’s episode of Inside Taiwan and follow for more signal over noise.

    9 min