LogiCast AWS News

From: Logicata

About this audio content

LogiCast, brought to you by Logicata, is a weekly AWS News podcast hosted by Karl Robinson, CEO and Co-Founder of Logicata, and Jon Goodall, Lead Cloud Engineer. Each week we hand-pick a selection of news articles on Amazon Web Services (AWS) - we look at what's new, technical how-tos, and business-related news articles and take a deep dive, giving commentary, opinion, and a sprinkling of humor. Please note this is the audio-only version of LogiCast. If you would like the video version, please check out https://logicastvideo.podbean.com/

Copyright 2025. All rights reserved.

Politics & Government
Episodes
  • Season 5 Episode 15: Interconnect, Migrations, and Modular Data Centers
    Apr 21 2026
    In Season 5, Episode 15, Karl and Jon are joined by Damien Jones, an AWS Community Builder, to discuss AWS Interconnect, now generally available for multi-cloud connectivity with Google Cloud Platform, with Azure and Oracle Cloud coming later; database migration acceleration using Kiro and Amazon Bedrock Agent Core to speed up migrations to Amazon Aurora DSQL; Project Glasswing, Anthropic's restricted-preview model for detecting AI-driven cyberattacks and identifying vulnerabilities; Amazon's AI revenue, with the CEO revealing $15 billion in annualized AI services revenue, roughly 10% of AWS's run rate; and Project Houdini, AWS's initiative using prefabricated modular data centers to accelerate construction timelines. And, of course, the guys got excited about the prospect of a Lidl cloud platform...

    07:40 - AWS Interconnect - Multi-Cloud Connectivity

    AWS has announced the general availability of AWS Interconnect, a dedicated service for connecting AWS with other cloud providers more reliably and efficiently than VPNs. It currently supports Google Cloud, with Azure and Oracle Cloud expected by late 2026. Pricing depends on capacity and distance, starting around $90,000 per month for 10 Gbps between nearby regions and rising to nearly $400,000 for longer cross-region links. AWS has also open-sourced the specification on GitHub to encourage broader adoption. The service removes unpredictable internet egress fees and guarantees capacity, making it most relevant for large enterprises with hybrid or multi-cloud environments. Still, it is a premium solution for moving data between clouds, not for reducing multi-cloud complexity itself.

    17:22 - Accelerating Database Migration with Kiro and Bedrock Agent Core

    AWS shared a technical guide showing how Kiro and Amazon Bedrock Agent Core can speed up schema analysis for migrations to Amazon Aurora DSQL. The approach helps identify schema mapping needs and compatibility issues early, reducing the need for deep migration expertise during planning. But the discussion raised concerns about production readiness: it depends on persistent Kiro CLI sessions that lose in-memory analysis if interrupted, forcing a restart, and it lacks the real-time observability of native AWS DMS tools. While useful for proof-of-concept work and easing upfront analysis, the panelists were cautious about recommending it for production migrations without stronger persistence and observability. More broadly, they noted that AI-driven "faster" database migration tooling is part of a familiar cycle, while the core migration challenges remain largely the same.

    27:06 - Project Glasswing - AI-Driven Cybersecurity Tool

    Anthropic launched Project Glasswing, a restricted-preview model aimed at detecting and preventing AI-driven cyberattacks by finding software vulnerabilities. It reportedly uncovered thousands of critical bugs in core internet infrastructure, including projects like FFmpeg and OpenSSL, often maintained by very small teams. Access is limited to about 40 organizations, which sparked debate over publicly promoting a powerful tool that few can use. The discussion raised concerns about a two-tier security landscape, possible future "Glasswing scan" requirements in cyber insurance, and broader AI safety issues as models grow more capable. While restricting access to dangerous tools may be sensible, the panelists argued that the public hype creates perverse incentives and could let a small group of firms charge premium prices for exclusive access.

    38:29 - Amazon AI Revenue and Investment Strategy

    Amazon CEO Andy Jassy said in the annual shareholder letter that AWS AI services are now generating more than $15 billion in annualized revenue, about 10% of AWS's total run rate. Amazon has committed $200 billion in capital spending for data centers and AI chips, reflecting strong demand for specialized infrastructure. Jassy also noted that two major AWS customers asked for exclusive access to all Graviton capacity in 2026, a request Amazon declined to avoid limiting other customers. The letter underscored the strategic value of AWS's AI and chip business, with discussion pointing to a more disciplined approach: Amazon is now securing customer demand before building capacity, with production already committed into 2027 and 2028. While the ROI horizon is still long given the scale of spending, demand and adoption appear to be accelerating.

    45:16 - Project Houdini - Prefabricated Data Center Construction

    AWS announced Project Houdini, which uses prefabricated modular data center units, or "skids," to speed up data center construction and AI infrastructure deployment. While the idea of prefabrication is not new, AWS is standardizing it at scale to cut build times. The panelists noted the bigger constraint is power, not construction: aging UK and European grids are already under strain and often cannot support modern data center demand. That has pushed ...
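The early schema-compatibility flagging described in the 17:22 segment can be illustrated with a small sketch. This is a hypothetical example, not Kiro's or Bedrock Agent Core's actual behavior, and the unsupported-type list and suggested remediations are invented for illustration rather than taken from Aurora DSQL documentation:

```python
# Hypothetical illustration of surfacing schema-compatibility issues early
# in migration planning. The UNSUPPORTED map is invented for this sketch.
UNSUPPORTED = {
    "SERIAL": "use an IDENTITY column",
    "MONEY": "use NUMERIC",
}

def flag_incompatibilities(columns):
    """columns: list of (name, type) pairs; returns remediation notes
    for any column types the target engine does not support."""
    return {
        name: UNSUPPORTED[ctype]
        for name, ctype in columns
        if ctype in UNSUPPORTED
    }

issues = flag_incompatibilities(
    [("id", "SERIAL"), ("amount", "MONEY"), ("note", "TEXT")]
)
print(issues)  # {'id': 'use an IDENTITY column', 'amount': 'use NUMERIC'}
```

The point of running this kind of check up front, as the episode notes, is that mapping problems surface during planning rather than mid-migration, when an interrupted analysis session is far more costly.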
    52 min
  • Season 5 Episode 14: S3 Files, Kubernetes Scaling, and the SaaSpocalypse
    Apr 13 2026

    In Season 5, Episode 14, Karl and Jon are joined by Destiny Erhabor, an AWS Community Builder, to discuss the S3 Files launch, AWS's new file system interface for S3 buckets that provides POSIX-compliant access to S3 data through a cached file system layer. They also cover EKS Managed Node Groups with EC2 Auto Scaling Warm Pools, a new feature that simplifies Kubernetes cluster auto-scaling and reduces operational complexity; the ongoing AWS Middle East data center disruptions caused by drone strikes, including full-month service credits and emergency restoration efforts; AWS's AI investment strategy, including its simultaneous investments in Anthropic and OpenAI and how that positions it against Amazon Nova models; and the broader AI hype cycle, including whether AI could disrupt SaaS business models in a so-called "SaaSpocalypse" and what kind of real ROI companies are actually seeing from AI investments. And, for the record, no crimes were committed during the recording of this podcast.

    03:19 - S3 Files Launch - Making S3 Buckets Accessible as File Systems

    AWS's new file system interface for S3 buckets, providing POSIX-compliant access to S3 data through a cached file system layer

    15:54 - EKS Managed Node Groups Now Support EC2 Auto Scaling Warm Pools

    New feature simplifying Kubernetes cluster auto-scaling and reducing operational complexity

    22:26 - AWS Teams Working Round-the-Clock to Restore Middle East Region Services Following Drone Strikes

    Ongoing impact of drone strikes on Middle East regions, including full month service credits and emergency restoration efforts

    31:08 - AWS CEO Matt Garman Defends Simultaneous Multi-Billion Dollar Investments in Anthropic and OpenAI

    Discussion of AWS's simultaneous investments in Anthropic and OpenAI, and competitive positioning with Amazon Nova models

    37:01 - AWS CEO Addresses AI "SaaSpocalypse" Concerns at Human X Conference

    Debate over whether AI will disrupt SaaS business models and discussion of genuine ROI from AI investments
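The S3 Files segment above (03:19) describes exposing bucket objects through a POSIX-compliant layer, meaning ordinary file APIs replace SDK calls. A minimal sketch of what that access pattern looks like; since the mount point itself is hypothetical, a temporary directory stands in for the mounted bucket so the example runs anywhere:

```python
import os
import tempfile

# Stand-in for a bucket mounted via an S3 Files-style POSIX layer.
# A real mount path is assumed hypothetically; a temp dir simulates it.
mount_point = tempfile.mkdtemp(prefix="s3-files-demo-")

# With POSIX-compliant access, plain file APIs operate on "objects":
path = os.path.join(mount_point, "reports", "latest.txt")
os.makedirs(os.path.dirname(path), exist_ok=True)

with open(path, "w") as f:   # no SDK client, no explicit GetObject/PutObject
    f.write("hello from an s3-backed filesystem")

with open(path) as f:
    print(f.read())  # prints: hello from an s3-backed filesystem
```

The trade-off the episode alludes to is the cached layer in between: file-style reads and writes gain familiarity and tooling compatibility, but consistency and latency characteristics depend on how that cache synchronizes with the underlying bucket.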

    44 min
  • Season 5 Episode 13: Agents, Instances, and Supply Chain Attacks
    Apr 8 2026
    In Season 5, Episode 13, Karl and Jon discuss a packed lineup of AWS news, including the general availability of AWS DevOps Agent with autonomous incident response capabilities, support for EC2 instance store in Amazon ECS Managed Instances for latency-sensitive workloads, and the introduction of managed daemons for managed instances, similar to Kubernetes DaemonSets. They also cover how to build high-performance applications with AWS Lambda managed instances, a migration guide for moving from Amazon ElastiCache for Redis to ElastiCache for Valkey, and the European Commission data breach involving a compromised AWS account through a supply chain attack on Aqua Security's Trivy vulnerability scanner. And along the way, the guys realize that Karl's muscle memory for intro titles is apparently so bad, he could probably forget his own name if he took a week off.

    03:24 - AWS DevOps Agent General Availability and Autonomous Incident Response with DevOps Agent

    AWS DevOps Agent has officially moved from preview to general availability. This service acts as an autonomous incident investigation tool that can analyze logs, telemetry, and infrastructure metrics to help teams understand what's going wrong during incidents. Rather than replacing human SREs, it accelerates the investigation phase by correlating data from multiple sources (CloudWatch logs, monitoring tools, error messages) and reducing the time spent in manual troubleshooting. The tool can be integrated with existing monitoring platforms like PagerDuty, Datadog, New Relic, and Grafana. It supports "skills" (essentially runbooks or if-then rules) that can be customized for known failure patterns specific to an organization's infrastructure. Currently in GA, it can perform investigations but cannot yet execute remediation actions, though this is expected as a future capability. Notable customers in production include Western Governors University, ZenChef, T-Mobile, and Granola.

    This article provides a practical walkthrough for implementing DevOps Agent in AWS environments to handle incident response workflows. It demonstrates how to set up the integration between incident management systems and DevOps Agent, allowing automated investigation workflows to be triggered when alerts fire. The article shows bidirectional integration with services like PagerDuty (which can feed alerts into DevOps Agent) and Slack (for notifications), and outbound capabilities to create incidents or update existing ones. The key value proposition is that the tool can handle approximately 80% of the incident investigation burden—the time-consuming process of correlating logs, metrics, and events—while human engineers remain responsible for decision-making and remediation approvals.

    14:44 - Amazon ECS Managed Instances Support for EC2 Instance Store and Amazon ECS Managed Daemons for Managed Instances

    Amazon ECS Managed Instances now supports EC2 instance store volumes, which are high-performance local storage options connected directly to physical instances. Instance store provides lower latency than EBS volumes since it's attached directly to the hardware rather than accessed over a network. This feature is primarily useful for highly latency-sensitive containerized workloads that require extremely fast disk access. While the number of use cases for this is relatively niche, it enables scenarios where applications need local, high-speed temporary storage without the network latency overhead of EBS volumes. This represents one of several recent enhancements to ECS Managed Instances.

    ECS Managed Instances also now supports managed daemons, a capability analogous to Kubernetes DaemonSets. This feature ensures that exactly one instance of a specified container runs on every node in an ECS cluster. This is particularly useful for system-level services that need to be present on all instances—such as monitoring agents (New Relic, Datadog), log collectors, or security scanning tools. Previously, this functionality was available for traditional self-managed EC2 compute but was missing from managed instances. The feature automatically scales with cluster size: adding a new instance to the cluster automatically deploys the daemon, and removing an instance removes it accordingly. This brings ECS Managed Instances to feature parity with self-managed EC2 deployments for daemon-like workloads.

    20:10 - Building High-Performance Apps with AWS Lambda Managed Instances

    AWS has published guidance on using Lambda managed instances for high-performance computing scenarios. Lambda managed instances allow developers to run Lambda functions on dedicated EC2 instances that AWS manages, providing higher resource availability than traditional Lambda. This hybrid approach enables use cases requiring consistent high CPU capacity, GPU access, or sustained high concurrency that traditional Lambda (which has memory/CPU scaling limits) cannot efficiently support. However, this ...
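The DaemonSet-like semantics described in the 14:44 segment — exactly one daemon container per node, added and removed as the cluster scales — amount to a simple reconciliation loop. A hypothetical sketch of that logic, not AWS's implementation; the instance IDs are made up:

```python
def reconcile_daemons(nodes, placements):
    """Return (to_add, to_remove) so each node runs exactly one daemon copy.

    Mirrors the DaemonSet-style behavior described for ECS managed daemons:
    nodes missing the daemon get one scheduled; daemons on departed nodes
    are torn down.
    """
    nodes, placements = set(nodes), set(placements)
    return sorted(nodes - placements), sorted(placements - nodes)

# A node joins the cluster and another leaves; the controller converges.
to_add, to_remove = reconcile_daemons(
    nodes=["i-aaa", "i-bbb", "i-ccc"],       # current cluster members
    placements=["i-bbb", "i-ddd"],           # where the daemon runs today
)
print(to_add)     # ['i-aaa', 'i-ccc']  -> schedule daemon here
print(to_remove)  # ['i-ddd']           -> daemon's node has departed
```

Running this reconciliation on every cluster-membership change is what makes the feature "automatic": the operator declares the daemon once, and placement follows cluster size without manual intervention.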
    38 min