What is a Linux container?


A Linux® container is a set of one or more processes that are isolated from the rest of the system. All the files necessary to run them are provided by a distinct image, meaning Linux containers are portable and consistent as they move from development, to testing, to production. This makes using them much faster than development pipelines that rely on replicating traditional testing environments. Building security into the container pipeline and defending the infrastructure helps keep containers reliable, scalable, and trusted. You can also easily move a containerized application between public, private, on-premises, and hybrid cloud environments and datacenters while maintaining consistent behavior and functionality.

Try using Linux containers

Imagine you’re developing an application. You’re working on a laptop and your environment has a specific configuration. Other developers may have slightly different configurations. The application you’re developing relies on that configuration and its specific libraries, dependencies, and files. Meanwhile, your business has development and production environments that are standardized with their own configurations and their own sets of supporting files. You want to emulate those environments as much as possible locally, but without all the overhead of recreating the server environments. One way to make your app work across these environments, pass quality assurance, and deploy without massive headaches, rewriting, and time-consuming disaster recovery is to use containers. 
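The scenario above can be sketched as a container build file. This is a minimal, hypothetical example: the base image, application file, and dependency list are placeholders, not a prescribed setup. The point is that the dependencies are pinned inside the image, so the exact same environment travels from your laptop through testing and into production.

```dockerfile
# Hypothetical Containerfile for a Python app; names are illustrative.
FROM registry.access.redhat.com/ubi9/python-312

# Install pinned dependencies into the image, not the host,
# so every environment runs the same library versions.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code and define how it starts.
COPY . .
CMD ["python", "app.py"]
```

A tool such as Podman or Docker could then build and run this image (for example, `podman build -t myapp .` followed by `podman run myapp`) on any compatible Linux host.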

Containers help reduce conflicts between your development and operations teams by separating areas of responsibility. Developers can focus on their apps and operations teams can focus on the infrastructure. And because containers are based on open source technology, you can benefit from the latest advancements as soon as they’re available. Container technologies—including Podman, Skopeo, Buildah, CRI-O, Kubernetes, and Docker—help your team simplify, accelerate, and orchestrate application development and deployment at scale.

A brief history of containers

The Linux Containers project (LXC) is an open source container platform that provides a set of tools, templates, libraries, and language bindings. LXC has a simple command line interface that improves the user experience when starting containers.

LXC offers an operating system level virtualization environment that is available to be installed on many Linux-based systems. It may be available through the package repository of your Linux distribution.

The idea of what we now call container technology first appeared in 2000 as FreeBSD jails, a technology that allows the partitioning of a FreeBSD system into multiple subsystems, or jails. Jails were developed as safe environments that a system administrator could share with multiple users inside or outside of an organization.

In 2001, an implementation of an isolated environment made its way into Linux, by way of Jacques Gélinas’ VServer project. This set the foundation for multiple controlled userspaces in Linux, which would ultimately form what is today’s Linux container.

Very quickly, more technologies combined to realize this isolated approach. Control groups (cgroups) is a kernel feature that controls and limits resource usage for a process or group of processes. And systemd, the initialization system that sets up userspace and manages its processes, uses cgroups to gain greater control over these isolated processes. Both of these technologies, while adding overall control for Linux, provided the framework that allows environments to stay separated.
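The interplay between systemd and cgroups can be seen in a resource-control unit. This is a hedged sketch of a hypothetical systemd slice (the name and limit values are examples); systemd translates each directive into the corresponding cgroup controller setting, capping every process placed in the slice.

```ini
# /etc/systemd/system/demo.slice — hypothetical example slice.
[Slice]
MemoryMax=512M     # hard memory ceiling (cgroup v2 memory.max)
CPUQuota=50%       # at most half of one CPU (cpu.max)
TasksMax=100       # limit on processes/threads (pids.max)
```

A command could then be run under these limits with something like `systemd-run --slice=demo.slice <command>`, without the process itself needing any awareness of cgroups.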

Enter Docker

Docker came onto the scene in 2008 (by way of dotCloud) with their eponymous container technology. The Docker technology added new concepts and tools—a simple command line interface for running and building new layered images, a server daemon, a library of pre-built container images, and the concept of a registry server. Combined, these technologies allowed users to build new layered containers quickly and easily share them with others.

There are 3 major standards to ensure interoperability of container technologies—the OCI Image, Distribution, and Runtime specifications. Together, these specifications allow community projects, commercial products, and cloud providers to build interoperable container technologies (think of pushing your custom built images into a cloud provider’s registry server). Today Red Hat and Docker, among many others, are members of the Open Container Initiative (OCI)—an open governance structure that creates open industry standards for container formats, image specifications, and runtimes. 
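The OCI Image specification defines, among other things, a JSON image manifest that ties an image's configuration and layers together by digest. The sketch below shows the general shape of such a manifest; the digests and sizes are placeholders, and real manifests carry additional fields.

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:…",
    "size": 1470
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:…",
      "size": 30439111
    }
  ]
}
```

Because any OCI-conformant registry and runtime understand this format, an image built with one tool can be pushed, pulled, and run by another.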

Learn more about the Open Container Initiative

Containers share the same operating system kernel and isolate the application processes from the rest of the system so the whole thing can be migrated, opened, and used across development, testing, and production configurations. Because they are lightweight and portable, containers provide the opportunity for faster development and meeting business needs as they arise. Container orchestration is how you manage these deployments across an enterprise.

Kubernetes is an open source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying, scaling, and monitoring the health of containerized applications. Kubernetes allows you to dynamically allocate resources based on demand, which can improve performance and prevent overprovisioning. Kubernetes provides the platform to schedule and run containers on clusters of physical or virtual machines. Kubernetes architecture divides a cluster into components that work together to maintain the cluster's defined state. Red Hat® OpenShift® is a certified Kubernetes offering by the Cloud Native Computing Foundation (CNCF), and uses Kubernetes as a foundation for a complete platform to deliver cloud-native applications in a consistent way across hybrid cloud environments.
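Kubernetes' declarative model is easiest to see in a manifest. The Deployment below is a minimal, hypothetical example (the name, image, and replica count are placeholders): you declare the desired state, and Kubernetes continuously reconciles the cluster to match it.

```yaml
# Hypothetical Deployment manifest; names and values are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # desired state: keep 3 pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: quay.io/example/my-app:1.0   # placeholder image
        resources:
          requests:          # used by the scheduler to place the pod
            cpu: 100m
            memory: 128Mi
          limits:            # enforced on the node via cgroups
            cpu: 500m
            memory: 256Mi
```

If a pod crashes or a node fails, Kubernetes notices the divergence from the declared state and starts replacement pods automatically.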

Learn more about Red Hat OpenShift as a modern application platform

Virtualization uses software to create an abstraction layer that allows computer hardware elements, guided by a hypervisor, to be divided into multiple virtual computers. These virtual machines (VMs) are environments in which containers can run, but containers aren’t tied to virtual environments. Containers are a way of handling virtualization that lets you provision resources faster and make new applications available more quickly, because containers don’t require a hypervisor. Some software—like Red Hat® OpenShift® Virtualization—can both orchestrate containers and manage virtual machines, but the two remain complementary, distinct approaches:

  • Virtualization allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system.
  • Containers share the same operating system kernel and isolate the application processes from the rest of the system. For example: ARM Linux systems run ARM Linux containers, x86 Linux systems run x86 Linux containers, x86 Windows systems run x86 Windows containers. Linux containers are extremely portable, but they must be compatible with the underlying system.

VMs are constrained by the hypervisors that create them, which are tied to the resources of a physical machine. Containers, on the other hand, share the host’s operating system kernel and package only the application and its dependencies. Containers virtualize the operating system instead of the hardware, so containers and VMs function differently despite having similar resource isolation and allocation capabilities.

Containers and virtual machines can also be used together within IT infrastructures. In hybrid environments, containerized and VM-based applications can run on the same infrastructure, which creates a flexible approach that is adaptable for different needs. This is known as container-native virtualization. Organizations can use their existing VM infrastructure to host containerized applications or migrate workloads to containers over time without completely restructuring their entire IT. 

Learn more about Red Hat OpenShift Virtualization

Nothing is secure by default. There are a lot of moving parts to container security—you need to protect the container pipeline and applications as well as the deployment environments and infrastructure, and you need a plan for integrating with enterprise security tools and policies. Static security policies and checklists don’t scale for containers in the enterprise, so you need to know how to build better security into the container pipeline.

A DevSecOps approach is one in which culture, automation, and platform design are integrated with security, and security is treated as a shared responsibility among teams. Container security policies should address isolation and trust across communication pathways and access controls, and use tools for malware scanning and image signing to address potential vulnerabilities that may arise while containers are operating. A DevSecOps approach can help you maintain consistent security throughout the entire IT lifecycle.

Find out more about container security

Serverless is a cloud-native development model that allows developers to build and run applications without having to manage servers. The term “serverless” doesn’t mean there are no servers. It means the servers are abstracted away from application development. A cloud provider handles the routine work of provisioning, maintaining, and scaling the server infrastructure. 

With serverless, developers package their code in containers for deployment. Once deployed, serverless apps respond to demand and automatically scale up or down as needed. Serverless offerings from public cloud providers are usually metered on demand using an event-driven execution model. As a result, when a serverless function is sitting idle, it doesn’t cost anything.
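The scale-on-demand behavior described above is typically expressed declaratively. Red Hat OpenShift Serverless is based on the open source Knative project, so a serverless workload can be sketched as a Knative Service like the hypothetical one below (the name, image, and scaling bounds are placeholders):

```yaml
# Hypothetical Knative Service; names and values are examples.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"    # scale to zero when idle
        autoscaling.knative.dev/max-scale: "10"   # upper bound under load
    spec:
      containers:
      - image: quay.io/example/hello:latest       # placeholder image
```

With `min-scale` set to zero, the platform removes all replicas when no requests arrive and starts a new container on the first incoming request, which is what makes idle serverless functions effectively free.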

Find out more about Red Hat OpenShift Serverless

You can deploy containers for a number of workloads and use cases. Containers give your team the underlying technology needed for a cloud-native development style, so you can get started with DevOps and CI/CD (continuous integration and continuous deployment), and go serverless.

Container-based applications can work across highly distributed cloud architectures. Application runtimes provide tools to support a unified environment for development, delivery, integration, and automation.

You can also deploy integration technologies in containers, so you can easily scale how you connect apps and data, like real-time data streaming through Apache Kafka. If you're building a microservices architecture, containers are the ideal deployment unit for each microservice and the service mesh network that connects them.

Red Hat has a long history of working in the open source community to make containers secure, stable, reliable, and supported. Red Hat is also the second largest contributor to the Docker and Kubernetes codebases and works with the OCI and the Cloud Native Computing Foundation to improve container features, reliability, and security. As with all open source projects, Red Hat contributes code and improvements back to the upstream codebase—sharing advancements along the way.

Red Hat’s container-focused solutions and training offerings give you the infrastructure, platform, control, and knowledge to take advantage of everything containers have to offer. Whether it’s getting your development teams on a platform built with containers in mind, running your container infrastructure on an efficient and effective operating system, or providing storage solutions for the massive data generated by containers, Red Hat’s solutions have you covered.

Red Hat OpenShift delivers a scalable approach to security and reliability for containers and critical applications. With its comprehensive set of tools and services, Red Hat OpenShift streamlines the entire lifecycle of application development—from building and deploying to running and managing. It helps simplify the complexities of application modernization efforts, including building and modernizing applications with AI across multicloud and hybrid environments, enhancing efficiency and productivity for developers and IT operations teams. 

Red Hat Enterprise Linux serves as a reliable foundation for the next steps in your IT journey from migrating to the cloud to harnessing the power of edge and containers to experimenting with AI workloads. Developers can streamline application development with access to container tools, databases, programming languages, and runtimes while benefitting from a consistent experience across footprints. You can reduce complexity and increase portability and standardization with container tools while assembling customized and hardened Red Hat Enterprise Linux operating system images.

Image mode for Red Hat Enterprise Linux is a simple, consistent approach to building, deploying, and managing the OS using container technologies. Manage your OS with the same container tools and workflows as applications to create a consistent experience and common language across teams. Image mode addresses the challenge of drift by helping keep your servers’ configurations the same, eliminating deviations that can lead to system instability and security risks. It also helps enhance security by reducing attack surfaces with immutable system images, ensuring you know exactly what’s in each image. 
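With image mode, the operating system itself is described in the same kind of build file as an application container. The sketch below is a hypothetical example built on the bootc approach; the base image reference and package names are illustrative, not exact product paths.

```dockerfile
# Hypothetical image-mode Containerfile; base image path is an example.
FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Customize the OS with the same layering workflow used for app images.
RUN dnf -y install httpd && dnf clean all
RUN systemctl enable httpd
```

Because every server boots from the same immutable image, configuration drift is avoided: changes are made by building and rolling out a new image rather than by editing individual hosts.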

Red Hat named a Leader in the 2024 Gartner® Magic Quadrant™ for the second consecutive year

Red Hat was named a Leader in the Gartner 2024 Magic Quadrant for Container Management. This year, Red Hat was positioned furthest on the Completeness of Vision axis.

Red Hat OpenShift Container Platform

A consistent hybrid cloud foundation for building and scaling containerized applications.

Keep reading

What is container orchestration?

Container orchestration automates the deployment, management, scaling, and networking of containers.

What is Kubernetes?

Kubernetes is a container orchestration platform that eliminates many manual processes involved in deploying and scaling containerized applications.

What is the Kubernetes Java client?

The Kubernetes Java client is a client library that enables the use of the Java programming language to interface with Kubernetes.
