Docker and Hugging Face Partner to Democratize AI https://www.docker.com/blog/docker-and-hugging-face-partner-to-democratize-ai/ Thu, 23 Mar 2023 17:43:09 +0000 https://www.docker.com/?p=41645 Today, Hugging Face and Docker are announcing a new partnership to democratize AI and make it accessible to all software engineers. Hugging Face is the most used open platform for AI, where the machine learning (ML) community has shared more than 150,000 models; 25,000 datasets; and 30,000 ML apps, including Stable Diffusion, Bloom, GPT-J, and open source ChatGPT alternatives. These apps enable the community to explore models, replicate results, and lower the barrier of entry for ML — anyone with a browser can interact with the models.

Docker is the leading toolset for easy software deployment, from infrastructure to applications, and the leading platform for collaboration among software teams.

Docker and Hugging Face partner so you can launch and deploy complex ML apps in minutes. With the recent support for Docker on Hugging Face Spaces, folks can create any custom app they want by simply writing a Dockerfile. What’s great about Spaces is that once you’ve got your app running, you can easily share it with anyone worldwide! 🌍 Spaces provides an unparalleled level of flexibility and enables users to build ML demos with their preferred tools — from MLOps tools and FastAPI to Go endpoints and Phoenix apps.
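To make this concrete, here is a minimal sketch of a Dockerfile for a custom Docker Space. The file names, base image, and framework are illustrative assumptions, not a prescribed template; the one Spaces convention worth noting is that the app should listen on the port configured for the Space (7860 by default):

```dockerfile
# Illustrative Dockerfile for a custom Docker Space (app.py and
# requirements.txt are hypothetical files in your repository).
FROM python:3.10-slim

WORKDIR /app

# Install dependencies first so this layer is cached between rebuilds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

# Spaces routes traffic to the port your app listens on (7860 by default).
EXPOSE 7860
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
```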

Spaces also come with pre-defined templates of popular open source projects for members who want to get their end-to-end project into production in a matter of seconds with just a few clicks.

Screen showing options to select the Space SDK, with Docker and 3 templates selected.

Spaces enable easy deployment of ML apps in all environments, not just on Hugging Face. With “Run with Docker,” millions of software engineers can access more than 30,000 machine learning apps and run them locally or in their preferred environment.

Screen showing app options, with Run with Docker selected.

“At Hugging Face, we’ve worked on making AI more accessible and more reproducible for the past six years,” says Clem Delangue, CEO of Hugging Face. “Step 1 was to let people share models and datasets, which are the basic building blocks of AI. Step 2 was to let people build online demos for new ML techniques. Through our partnership with Docker Inc., we make great progress towards Step 3, which is to let anyone run those state-of-the-art AI models locally in a matter of minutes.”

You can also discover popular Spaces in the Docker Hub and run them locally with just a couple of commands.
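As a sketch of what that looks like (the image path below is a placeholder; each Space's page shows the exact command to copy):

```console
$ docker pull registry.hf.space/<user>-<space>:latest
$ docker run -it -p 7860:7860 registry.hf.space/<user>-<space>:latest
```

Once running, the app is available on localhost at the published port.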

To get started, read Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces. Or try Hugging Face Spaces now.

New Docker and JFrog Partnership Designed to Improve the Speed and Quality of App Development Processes https://www.docker.com/blog/docker-and-jfrog-team-up-to-give-developers-even-more-flexibility/ Tue, 26 Jan 2021 14:00:00 +0000 https://www.docker.com/blog/?p=27409 Today, Docker and JFrog announced a new partnership to ensure developers can benefit from integrated innovation across both companies’ offerings. This partnership sets the foundation for ongoing integration and support to help organizations increase both the velocity and quality of modern app development. 

The objective of this partnership is simple: how can we ensure developers can get the images they want and trust, and make sure they can access them in whatever development process they are using from a centralized platform? To this end, the new agreement between Docker and JFrog ensures that developers can take advantage of their Docker Subscription and Docker Hub Official Images in their Artifactory SaaS and on-premise environments so they can build, share and run apps with confidence.

At a high level, a solution based on the Docker and JFrog partnership looks like this:

Diagram: developers pull images from Docker Hub, and those images are cached into and managed by JFrog Artifactory.

In this sample architecture, developers can build apps with images, including Docker Official Images and images from popular OSS projects and software companies, from Docker Hub. As images are requested, they are cached into JFrog Artifactory, where images can be managed by corporate policies, cached for high performance, and mirrored across an organization’s infrastructure. Also, the images in Artifactory can take advantage of other features in the JFrog suite, including vulnerability scanning, CI/CD pipelines, policies and more. All without limits.
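As an illustration of this pull-through flow (the registry hostname and repository name below are hypothetical; substitute your own Artifactory instance), a Compose service simply references the image through Artifactory's registry endpoint, and the first pull populates the cache:

```yaml
# Sketch of a compose file pulling a Docker Official Image through a
# hypothetical Artifactory remote repository that mirrors Docker Hub.
version: "3"
services:
  web:
    image: myorg.jfrog.io/docker-remote/library/nginx:latest
    ports:
      - "8080:80"
```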

This is an exciting first for Docker, as the partnership with JFrog opens up new ways of integrating leading tools to improve outcomes for developers. With integration across Docker Hub and Artifactory, premier access to the trusted high-quality Docker Official Images in Docker Hub, and secure, central access to images in Artifactory, we believe this partnership will bring immediate results to our developer communities including:

  • More value to Docker Subscription users with tight integration into private repositories
  • Premier access to trusted, high-quality images from Docker Hub
  • Central access to Docker Official Images in Artifactory
  • Streamlined application development workflows

But this is just the beginning. Over the coming months, we will keep improving this integration to bring new capabilities and productivity improvements to modern app developers.
You can get started now! If you are an Artifactory user, you will see the benefits of premier access to Docker Hub images right away. You can learn more about the announcement from the JFrog blog here. And you can get technical details and how-to information from the JFrog documentation.

Check out the Azure CLI experience now available in Desktop Stable https://www.docker.com/blog/check-out-the-azure-cli-experience-now-available-in-desktop-stable/ Wed, 16 Sep 2020 17:00:00 +0000 https://www.docker.com/blog/?p=27029 Back in May we announced the partnership between Docker and Microsoft to make it easier to deploy containerized applications from the Desktop to the cloud with Azure Container Instances (ACI). Then in June we shared the first version of this as part of a Desktop Edge release. This allowed users to run existing Docker CLI commands directly against ACI, making getting started with containers in the cloud simpler than ever.

We are now pleased to announce that the Docker and ACI integration has moved into Docker Desktop stable 2.3.0.5 giving all Desktop users access to the simplest way to get containers running in the cloud. 

Getting started 

To get going as a new starter, all you need to do is upgrade your existing Docker Desktop to the latest stable version (2.3.0.5), store your image on Docker Hub so you can deploy it (you can get started with Hub here), and then create an ACI context to deploy it to. For a simple example of getting started with ACI, see our initial blog post on the Edge experience.
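Assuming your image is already built, those steps boil down to a few commands (the image name here is a placeholder):

```console
$ docker push <your-hub-id>/myapp:latest     # make the image available on Docker Hub
$ docker login azure                         # authenticate with your Azure account
$ docker context create aci myacicontext     # create the ACI context interactively
```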

More CLI commands

We have added some new features since we first released the Edge experience; one of the biggest changes was the addition of the new docker volume command. We added this because we wanted to make sure there was an easy way to create persistent state that survives between runs of your containers. For a database it is always better to use a dedicated service like Cosmos DB, but while you are getting started, volumes are a great way to store state.

This allows you to use existing volumes or create new ones. To get started with a new volume, we first select our ACI context:

$ docker context use myaci 

Then we can create a volume in a similar way to the local Docker CLI, though in this case there are a couple of cloud-specific parameters we need to provide:

$ docker volume create --storage-account mystorageaccount --fileshare test-volume

Now I can use this volume either from my CLI:

docker run -v mystorageaccount/test-volume:/target/path myimage

Or from my Compose file:

myservice:
  image: nginx
  volumes:
    - mydata:/mount/testvolumes

volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: test-volume
      storage_account_name: mystorageaccount

Along with this, the CLI now supports some of the most popular commands that were previously missing, including stop, start, and kill.
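These behave just as they do locally, only against your ACI containers (the container name is illustrative):

```console
$ docker stop mywebapp
$ docker start mywebapp
$ docker kill mywebapp
```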

Try it out

If you are after some ideas of what you can do with the experience, check out Guillaume's blog post on running Minecraft in ACI. The whole thing takes about 15 minutes and is a great example of how simple it is to get containers up and running.

Microsoft offers $200 of free credit to use in your first 30 days, which is a great way to try out the experience. Once you have an account, you will just need Docker Desktop and a Docker Hub repository with your images saved in it, and you can start deploying!

To get started today with the new Azure ACI experience, download Docker Desktop 2.3.0.5 and try out the experience yourself. If you enjoy the experience, have feedback on it or other ideas on what Docker should be working on please reach out to us on our Roadmap.

From Docker Straight to AWS https://www.docker.com/blog/from-docker-straight-to-aws/ Thu, 09 Jul 2020 16:00:00 +0000 https://www.docker.com/blog/?p=26652 Just about six years ago to the day, Docker hit the first milestone for Docker Compose, a simple way to lay out your containers and their connections. A talks to B, B talks to C, and C is a database. Fast forward six years and the container ecosystem has become complex. New managed container services have arrived, bringing their own runtime environments, CLIs, and configuration languages. This complexity serves the needs of the operations teams who require fine-grained control, but carries a high price for developers.

One thing that has remained constant over this time is that developers love the simplicity of Docker and Compose. This led us to ask: why do developers now have to choose between simple and powerful? Today, I am excited to finally be able to talk about the result of what we have been working on for over a year to provide developers power and simplicity from desktop to the cloud using Compose. Docker is expanding our strategic partnership with Amazon and integrating the Docker experience you already know and love with Amazon Elastic Container Service (ECS) with AWS Fargate. Deploying straight from Docker to AWS has never been easier.

Today this functionality is being made available as a beta UX, using docker ecs to drive commands. Later this year, when the functionality becomes generally available, it will become part of our new Docker Contexts and will allow you to just run docker run and docker compose.
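As a rough sketch of the beta flow (these subcommands reflect the beta UX and may change before general availability, so treat them as illustrative):

```console
$ docker ecs setup           # create an AWS context/profile for deployments
$ docker ecs compose up      # deploy the services in your compose file to ECS on Fargate
$ docker ecs compose down    # tear the deployment back down
```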

To learn more about what we are building together with Amazon, go read Carmen Puccio's post over at the Amazon Container blog. After that, register for the Amazon Cloud Container Conference and come see Carmen's and my session at 3:45 PM Pacific.

We are extremely excited for you to try out the public beta starting right now. To get started, you can sign up for a Docker ID, or use your existing Docker ID, and download the latest version of Docker Desktop Edge 2.3.3.0, which includes the new experience. You can also head straight over to the GitHub repository, which includes the conference session's demo so you can follow along. We are excited for you to try it out, report issues, and let us know what other features you would like to see on the Roadmap!

Running a container in Microsoft Azure Container Instances (ACI) with Docker Desktop Edge https://www.docker.com/blog/running-a-container-in-aci-with-docker-desktop-edge/ Thu, 25 Jun 2020 15:59:47 +0000 https://www.docker.com/blog/?p=26558 Earlier this month Docker announced our partnership with Microsoft to shorten the developer commute between the desktop and running containers in the cloud. We are excited to announce the first release of the new Docker Azure Container Instances (ACI) experience today and wanted to give you an overview of how you can get started using it.

The new Docker and Microsoft ACI experience allows developers to easily move between working locally and in the Cloud with ACI; using the same Docker CLI experience used today! We have done this by expanding the existing docker context command to now support ACI as a new backend. We worked with Microsoft to target ACI as we felt its performance and ‘zero cost when nothing is running’ made it a great place to jump into running containers in the cloud.

ACI is a Microsoft serverless container solution for running a single Docker container or a service composed of a group of multiple containers defined with a Docker Compose file. Developers can run their containers in the cloud without needing to set up any infrastructure and take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. For production cases, you can leverage Docker commands inside of an automated CI/CD flow.

Thanks to this new ACI context, you can now easily run a single container in Microsoft ACI using the docker run command but also multi-container applications using the docker compose up command.

This new experience is now available as part of Docker Desktop Edge 2.3.2. To get started, simply download the latest Edge release or update if you are already on Desktop Edge.

Create an ACI context

Once you have the latest version, you will need to get started by logging into an Azure account. If you don’t have one you can sign up for one with $200 of credit for 30 days to try out the experience here. Once you have an account you can get started in the Docker CLI by logging into Azure: 

docker login azure

This will load up the Azure authentication page, allowing you to log in using your credentials and Multi-Factor Authentication (MFA). Once you have authenticated, you will see a login succeeded message in the CLI, and you are now ready to create your first ACI context. To do this you will need to use the docker context create aci command. You can either pass in an Azure subscription and resource group to the command or use the interactive CLI to choose them, or even create a resource group. For this example I will deploy to my default Resource Group.

docker context create aci myacicontext

My context is then created and I can check this using docker context ls


Single Container Application Example

Before I use this context, I am going to test my application locally to check everything is working as expected. I am just going to use a very simple web server with a static HTML web page.

I start by building my image and then running it locally to check that it works.

Getting ready to run my container on ACI, I now push my image to Docker Hub using docker push bengotch/simplewhale and then change my context using docker context use myacicontext. From that moment on, all subsequent commands will run against this ACI context.

I can check that no containers are running in my new context using docker ps. To run my container on ACI, I only need to repeat the very same docker run command as earlier. I can see my container is running and can use the IP address to access my container running in ACI!


I can now remove my container using docker rm. Note that once the command has been executed, nothing is running on ACI and all resources are removed from ACI – resulting in no ongoing cost.

Multi-Container Application Example

With the new Docker ACI experience we can also deploy multi-container applications using Docker Compose. Let's take a look at a simple three-part application with a Java backend, a Go frontend, and a Postgres database.

To start, I swap to my default (local) context and run a docker-compose up to run my app locally. 


I then check that I can access it and see it running locally.

Now I swap over to my ACI context using docker context use myacicontext and run my app again. This time I can use the new syntax docker compose up (note the lack of a ‘-’ between docker and compose).


I can then check that this is working using its public IP address.

I have now run both my single container locally and in the cloud, along with running my multi-container app locally and in the cloud – all using the same artifacts and using the same Docker experience that I know and love!

Try out the new experience!

To try out the new experience, download the latest version of Docker Desktop Edge today. You can raise bugs on our beta repo and let us know what other features you would like to see integrated by adding an issue to the Docker Roadmap!

DockerCon LIVE is here! https://www.docker.com/blog/dockercon-live-is-here/ Thu, 28 May 2020 15:55:00 +0000 https://www.docker.com/blog/?p=26294

DockerCon LIVE 2020 is about to kick off, and there are over 64,000 community members, users, and customers registered! Although we miss getting together in person, we're excited to be able to bring even more people together to learn and share how Docker helps dev teams build great apps. Like DockerCons past, there is so much great content on the agenda for you to learn and expand your expertise around containers and applications.

We've been very busy here at Docker. A couple of months ago, we outlined our refocused developer-focused strategy. Since then, we've made great progress executing against it and remain focused on bringing simplicity to the app-building experience, embracing the ecosystem, and helping developers and developer teams bring code to cloud faster and easier than ever before. A few examples:

We hope you can join us today for #DockerCon! There’s lots more code to cloud goodness to come from us, and we can’t wait to see what the community does next with Docker.  

Shortening the developer commute with Docker and Microsoft Azure https://www.docker.com/blog/shortening-the-developer-commute-with-docker-and-microsoft-azure/ Wed, 27 May 2020 16:00:00 +0000 https://www.docker.com/blog/?p=26287

Do you remember the first time you used Docker? I do. It was about six years ago and like many folks at the time it looked like this:

docker run -it redis

I was not using Redis at the time but it seemed like a complicated enough piece of software to put this new technology through its paces. A quick Docker image pull and it was up and running. It seemed like magic. Shortly after that first Docker command I found my way to Docker Compose. At this point I knew how to run Redis and the docs had an example Python Flask application. How hard could it be to put the two together?

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis"

I understood immediately how Docker could help me shorten my developer “commute.” All the time I spent doing something else just to get to the work I wanted to be doing. It was awesome! 

As time passed, unfortunately my commute started to get longer again. Maybe I needed to collaborate with a colleague or get more resources than I had locally. Ok, I can run Docker in the cloud; let me see how I can get a Docker Engine. Am I going to use another tool, set one up manually, automate the whole thing? What about updates? Maybe I should use one of the managed container services? Well, then I'd have to use a different CLI and perhaps a different file format. That would be totally different from what I'm using locally. Seeing my commute bloat substantially, my team and I began to work on a solution and set out to find collaborators among the cloud service providers.

I am excited to finally be able to talk about the result of a collaborative set of ideas that we’ve been working on for a year to once again shorten your developer commute. Docker is expanding our strategic partnership with Microsoft and integrating the Docker experience you already know and love with Azure Container Instances (ACI). 

What does that mean for you? The same workflow in Docker Desktop and with the Docker CLI and the tools you already have with all the container compute you could want. No infrastructure to manage. No clusters to provision. When it is time to go home, docker rm will stop all the meters. We will be giving an early preview of this work on stage at DockerCon tomorrow; so please register here and watch the keynote. 

Let me give you a sense of how simple the process will be. You will even be able to log into Azure directly from the Docker CLI so you can connect to your Azure account. The login experience will feel very familiar, and odds are you have used it before for other services:

docker login azure


Once you are logged in, you just need to tell Docker that you want to use something besides the local engine. This is my favorite part– it is where in my opinion the magic lives. About a year ago we introduced Docker Context. Originally, it let you switch between engines (local or remote), Swarm, and Kubernetes. When it launched, I thought we needed to make this happen for any service that can run a container. If you want to shorten a developer’s commute, this is the way to do it.

docker context create aci-westus aci --aci-subscription-id xxx --aci-resource-group yyy --aci-location westus

All you need is a set of Azure credentials. If you have an Azure resource group you want to use you will be able to select it or we can create one for you. Once you have your Docker Context, you can tell Docker to switch to using it by default.

docker context use aci-westus

Once you have the context selected, it is just Docker. You can run individual containers. And you can run multiple containers with Docker Compose; look at Awesome Compose to find a compose file to try out. Or fire up Visual Studio Code and get back to doing what you wanted to do– writing code. As part of this strategic partnership with Microsoft, we are working closely with the Visual Studio Code teams to make sure the Docker experience is awesome.

Docker and Microsoft's partnership has a long history. I am proud to be able to talk about what I have been working on for the last year. Together we are working on getting a beta ready for release in the second half of 2020. You can register for the beta here.

For more information, check out this blog post by Paul Yuknewicz, Group Product Manager, Azure Developer Tools or read the press release.


If there are other providers you would like to see come on board to bring the simplicity of the Docker experience to the cloud, then please let us know on our public roadmap.  I am looking forward to telling you more about what else we have been working on soon!

