Taiwan is home to many of the world’s leading semiconductor, system, and hardware manufacturing capabilities, making it a critical foundation of the global AI and high-performance computing ecosystem. Yet translating this upstream technology leadership into large-scale cloud and data center infrastructure within Taiwan remains a complex and unfinished challenge.
TWCDC 2026 brings together Asia Pacific’s leading voices across data center design, investment, energy, connectivity, and regulation to examine what must change for Taiwan to scale its infrastructure ambition. The conversation goes beyond whether Taiwan can become a regional data center hub, and instead asks a larger question: how Taiwan, together with the wider Asia Pacific ecosystem, can play a more decisive role in powering the world’s next phase of AI growth.
[Keynote] Taiwan’s Power Security and the Roadmap for Hyperscale Data Center Expansion
This panel addresses:
1. How the data center supply chain is moving beyond individual components to platform-level solutions
2. What global operators look for when selecting OEM and ODM partners
3. The role of speed, customization, and repeatability in large-scale deployments
4. Balancing innovation, reliability, and global compliance in AI-driven infrastructure
5. How Taiwan’s OEM ecosystem supports global deployment models, from design to operations
[Opening Panel] From Components to Platforms: How Taiwan’s Data Center Supply Chain Underpins Global AI Infrastructure
As AI workloads drive unprecedented power density and demand, energy availability is becoming the defining constraint in data center development worldwide. This keynote explores how limited grid capacity, rising energy volatility, and sustainability expectations are reshaping data center design, site selection, and operational strategies. Drawing on global experience, the session examines how operators can maximize available power through efficiency, digitalized energy management, and resilient design, while maintaining reliability and investment viability in power-constrained markets.
[Keynote] Powering AI Under Constraints: How Energy Availability Is Redefining Global Data Center Design and Operations
This panel addresses:
1. How AI workloads are reshaping investment decisions, capacity planning, and expansion strategies for data center operators
2. The operational challenges of running high density, AI ready facilities while maintaining reliability and uptime
3. Power availability, grid coordination, and energy strategy from an operator perspective across different markets
4. Key similarities and differences between global and Taiwan operating models in the AI era
[Panel] Global and Taiwan Data Center Operators in Dialogue: Investment and Operating Strategies in the AI Era
How advances in semiconductor manufacturing influence compute capability, energy efficiency, and long-term planning for AI-driven data centers. As silicon efficiency improves and AI workloads evolve, will today’s liquid cooling strategies and data center designs remain relevant over the next five years?
[Keynote] Silicon Dominates AI Infrastructure: How Advanced Semiconductor Manufacturing Defines the Data Center Roadmap
This panel addresses:
1. Why power availability and thermal management must now be planned together as a single system for AI and high-density data centers
2. How on-site power generation, energy storage, and advanced cooling solutions can jointly unlock usable capacity in constrained markets
3. The growing role of liquid cooling and thermal design in enabling higher power density and sustained AI workloads
4. How cooling efficiency directly impacts power consumption, PUE, and long-term operability of AI infrastructure
5. Practical considerations around cost, deployment timelines, and operational complexity when combining on-site power with advanced cooling
6. How Taiwan’s thermal management ecosystem can support AI data center growth while navigating local power and infrastructure constraints
[Keynote] Power Supply and Cooling Strategies for AI Data Centers in Power-Constrained Markets
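The link between cooling efficiency and PUE raised in the panel points above can be sketched numerically. PUE (Power Usage Effectiveness) is the ratio of total facility power to IT equipment power, so reducing cooling overhead pulls the ratio toward 1.0. The figures below are hypothetical, for illustration only:

```python
# Minimal PUE sketch. PUE = total facility power / IT equipment power;
# a lower value means less power spent on cooling, conversion, and
# other non-IT overhead. All numbers here are hypothetical examples.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return PUE for a facility; total power includes IT load plus overhead."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A hypothetical 10 MW facility delivering 7.4 MW to IT equipment:
print(round(pue(10_000, 7_400), 2))  # → 1.35

# If better cooling cuts overhead so the same IT load needs only 8.5 MW total:
print(round(pue(8_500, 7_400), 2))  # → 1.15
```

In a power-constrained market, that overhead reduction is effectively reclaimed capacity: the same grid allocation can support more IT load.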
As data center projects grow in scale and complexity, many operational issues can be traced back to decisions made during design, construction, and commissioning. This session draws on real project experience to highlight ten common mistakes encountered in data center builds, particularly in high-density and AI-ready environments. Focusing on system integration, testing, operability, and handover, the keynote provides practical guidance on how operators and consultants can identify risks early, avoid costly rework, and ensure facilities perform as intended from day one and throughout their operational lifecycle.
[Keynote] Ten Common Mistakes in Data Center Builds and How to Prevent Them
This panel addresses:
1. How AI-driven demand is reshaping the balance between enterprise colocation, on-prem infrastructure, and hyperscale capacity
2. Whether traditional multi-tenant enterprise colocation models can adapt to higher power density and AI workloads
3. The growing role of hyperscale build-to-suit developments and what this means for operators, investors, and enterprise customers
4. Design, power, and operational challenges when serving both hyperscale and enterprise requirements
5. How market dynamics and local constraints, including power availability, influence the future of enterprise colocation in Taiwan and other global markets
[Panel] The Outlook for Enterprise Colocation and On-Premise Infrastructure: Will AI Drive Everything Toward Hyperscale Build-to-Suit?
As AI continues to move from centralized cloud environments to the edge, agentic edge AI enables devices to perceive, reason, and act locally in real time. This keynote presents a full-stack approach to building agentic edge systems, from AI acceleration IP through to customized edge chips. The session explores transformer-optimized architectures, hybrid inference models using MCP and A2A protocols, and co-design strategies that merge hardware and agent intelligence. Together, these approaches provide a practical pathway to develop next generation agentic edge solutions for vehicles, robotics, and smart devices.
[Keynote] Next-Generation Agentic Edge AI Architectures
This panel addresses:
1. The key drivers behind the shift from centralized cloud AI to distributed and edge-based intelligence
2. The infrastructure implications of Edge AI for compute architecture, power, cooling, and network design
3. How Edge AI complements rather than replaces cloud and hyperscale data center models
4. What operators, enterprises, and policymakers should consider when planning for a more distributed AI infrastructure landscape
[Panel] The Rise of Edge AI: Moving Intelligence Closer to the Data Source and the Infrastructure Implications
As AI systems evolve toward distributed and agent-based architectures, connectivity is emerging as a major cost and performance driver rather than a background utility. This keynote examines how network design choices influence capital expenditure, operating costs, and long-term scalability as intelligence moves across edge, core, and data center environments.
[Keynote] Connectivity Infrastructure in the Era of Distributed Intelligence
Drawing on lessons from high-performance computing environments, this session explores how performance per watt, thermal efficiency, and intelligent workload management are increasingly critical to sustaining growth under power and grid limitations.
[Keynote] Energy Efficiency Challenges and Strategies in the HPC and AI Era
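The performance-per-watt metric central to this session can be made concrete with a back-of-envelope comparison. The accelerator figures below are hypothetical, chosen only to illustrate how the metric is computed and compared:

```python
# Illustrative performance-per-watt comparison. Throughput divided by
# power draw tells you how much compute each watt of a constrained
# power budget buys. All hardware figures here are hypothetical.

def perf_per_watt(throughput_tflops: float, power_w: float) -> float:
    """Return sustained throughput (TFLOPS) per watt of power draw."""
    if power_w <= 0:
        raise ValueError("power draw must be positive")
    return throughput_tflops / power_w

# Two hypothetical accelerator generations:
gen_a = perf_per_watt(300.0, 700.0)    # 300 TFLOPS at 700 W
gen_b = perf_per_watt(500.0, 1000.0)   # 500 TFLOPS at 1000 W

# The newer part draws more power but does more work per watt,
# which is what matters under a fixed grid allocation.
print(gen_b > gen_a)  # → True
```

Under a fixed site power budget, the higher perf-per-watt part delivers more total compute even though each device draws more power.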
Taiwan is home to many of the world’s most critical AI and HPC ecosystem players, including TSMC, ASE Technology Holding, MediaTek, and leading system and platform builders such as Supermicro, Quanta, and Wiwynn. Together, these companies underpin a significant share of global AI compute, packaging, and system integration. Yet despite this structural advantage, much of the large-scale AI and HPC infrastructure enabled by Taiwanese technology is ultimately deployed elsewhere. This panel explores why Taiwan has not fully converted its upstream leadership into downstream AI and HPC infrastructure growth at home, examining constraints such as power availability, land readiness, regulatory alignment, and investment frameworks. The discussion asks what must change for Taiwan to better anchor AI and HPC workloads locally, and how operators, investors, and policymakers can align infrastructure strategy with Taiwan’s central role in Asia’s and the world’s AI value chain.
[Closing Panel] Taiwan’s Larger Role in Asia’s HPC and AI Growth: From Upstream Strength to Local Infrastructure Expansion
Please head to the registration area with the registration confirmation email containing your QR code, which was sent to your email address. Badges will be printed on-site at the event.
Upon successful completion of registration through the Cloud and Datacenter Convention website, your QR code will be sent to the email address you entered on the billing page.
Please allow up to 48 hours after registering for the confirmation email to arrive, and check your spam folder. If you still cannot find it, please contact us at [email protected].
Cloud and Datacenter professionals and partners are welcome to attend the technology discussions and networking opportunities. All attendees must register via the Cloud and Datacenter Convention event page.
We recommend registering online in advance at the Cloud and Datacenter Convention event page to avoid delays on event day.
Attendees are welcome to participate in multiple technology events. W.Media hosts over 25 global events across APAC and beyond, and global companies, VIP delegates, and speakers often travel overseas to attend multiple Cloud and Datacenter Conventions.
For more information on sponsorship opportunities, including on-site branding, exhibitor booths, speaking slots, digital branding, lead generation, and more, please fill out the inquiry form here.
The 2025 Cloud and Datacenter Convention event pages for each of our events will have the most up-to-date information, including timing, location, agenda, speaker and sponsor lists, and more.
Complimentary breakfast, coffee and tea, lunch, and evening networking drinks will be provided for attendees on a first come, first served basis in the expo area.
There will not be a live stream of the event; however, coverage of the event, including photographs, interviews, and articles, will be published by W.Media and other media partners.
This content will be posted to the W.Media LinkedIn page, the Cloud and Datacenter Convention page, the W.Media Newsletter, the W.Media website, and the Centerstage page.
If you are interested in becoming a media partner, please contact us at [email protected].