Amazon S3 Files gives the world’s biggest object store a file system
This matters because cloud-native tooling and platform engineering are reshaping how data teams build, deploy, and operate production data systems.
Amazon S3 can now be a file system. On Monday, AWS launched a new S3 feature called S3 Files. This article appeared first on The New Stack.
Editorial Analysis
S3 Files represents a pragmatic acknowledgment that object storage and traditional file systems serve different purposes, and AWS is finally closing that gap. For years, we worked around S3's eventually consistent model (strong read-after-write consistency arrived only in late 2020) and its lack of hierarchical locking by hiding it behind table formats such as Delta Lake or Apache Iceberg. Treating S3 as a first-class file system means simpler data pipelines, better compatibility with legacy tooling, and less cognitive overhead when designing architectures. The operational implication is significant: teams can deprecate unnecessarily complex abstraction layers and run streaming workloads, batch jobs, and interactive queries directly against S3 without sacrificing performance or reliability. This fits the broader shift toward unified data platforms where compute is decoupled from storage.

My recommendation is to evaluate S3 Files for greenfield projects, particularly where your stack already centers on AWS services such as EMR or SageMaker. Existing Iceberg implementations should stay put; they have already solved harder problems. The real win is simpler onboarding and less infrastructure sprawl for teams not yet locked into a table format.
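To make the gap concrete: S3 stores objects under flat keys, and "directories" are only emulated by listing with a prefix and a delimiter (the ListObjectsV2 model). The sketch below is a minimal, self-contained illustration of that flat-keyspace semantics in plain Python; the bucket contents and the helper name `list_dir` are illustrative assumptions, not the S3 Files API, whose details AWS has not spelled out here.

```python
def list_dir(keys, prefix, delimiter="/"):
    """Emulate S3's prefix+delimiter listing: return the immediate
    'children' of prefix, collapsing deeper keys into common prefixes
    (what the ListObjectsV2 API reports as CommonPrefixes)."""
    files, dirs = set(), set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Deeper key: surface only its first path segment as a "directory".
            dirs.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            files.add(key)
    return sorted(dirs), sorted(files)

# Hypothetical bucket contents -- flat keys, no real directories.
bucket = [
    "logs/2024/01/app.log",
    "logs/2024/02/app.log",
    "logs/readme.txt",
    "data.csv",
]

print(list_dir(bucket, "logs/"))   # → (['logs/2024/'], ['logs/readme.txt'])
```

Because the hierarchy is synthetic, operations a file system takes for granted, such as renaming a directory, mean copying every object under the old prefix; that is exactly the class of problem table formats and now, presumably, S3 Files are built to paper over.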