Docker Blog (http://www.qiwei365.com/blog), Mon, 23 Mar 2020

#myDockerBday Discounts on Docker Captain Docker + Kubernetes Content
http://www.qiwei365.com/blog/mydockerbday-discounts-on-docker-captain-content/ (Mon, 23 Mar 2020)

The post #myDockerBday Discounts on Docker Captain Docker + Kubernetes Content appeared first on Docker Blog.

If your #myDockerBday celebration included wanting to learn more about Docker or Kubernetes, you are in luck. In honor of Docker’s 7th birthday, Docker Captains have extended some fantastic deals on their learning content. Take this opportunity to level up your skills and learn Docker with excellent instructors. 

Books and EBooks

Through the end of March, you can get Elton Stoneman’s Learn Docker in a Month of Lunches and/or Jeff Nickoloff’s Docker in Action, 2nd Edition for 40% off using the code mtpdocker20.

Learn Docker in a Month of Lunches

Elton Stoneman
Go from zero to production readiness with Docker in 22 bite-sized lessons! Learn Docker in a Month of Lunches is an accessible, task-focused guide to Docker on Linux, Windows, or Mac systems. In it, you’ll learn practical Docker skills to help you tackle the challenges of modern IT, from cloud migration and microservices to handling legacy systems. There’s no excessive theory or niche use cases – just a quick-and-easy guide to the essentials of Docker you’ll use every day.

Docker in Action 2nd Edition

Jeff Nickoloff
Docker in Action, Second Edition teaches you the skills and knowledge you need to create, deploy, and manage applications hosted in Docker containers. This bestseller has been fully updated with new examples, best practices, and a number of entirely new chapters.

Video Courses

And through Saturday, March 28th, get the following Captain courses on Udemy for just $9.99.

Kubernetes 101

Nigel Poulton

Learn or brush up on the basics of Kubernetes. Kubernetes architecture is clearly explained, and Nigel will show you how to deploy, break, self-heal, scale, and perform rolling updates on a simple application. This course is perfect for helping you master the fundamentals.
Code: DOCKERBIRTHDAY


Docker Mastery

Bret Fisher

Build, test, and deploy containers with the best mega-course on Docker, Kubernetes, Compose, Swarm, and Registry using DevOps workflows.

Code: DOCKER_IS_7_C1

Kubernetes Mastery

Bret Fisher

Learn the latest Kubernetes features (1.16) and plugins while practicing DevOps workflows, from a container expert.

Code: DOCKER_IS_7_C4

Swarm Mastery

Bret Fisher

Build, automate, and monitor a service cluster for containers using the latest open source on Linux and Windows.

Code: DOCKER_IS_7_C2

Docker for Node.js

Bret Fisher

Build, test, deploy Node for Docker, Kubernetes, Swarm, and ARM with the latest DevOps practices from a container expert.

Code: DOCKER_IS_7_C3

Wait…there’s more!

Continue the celebration with Docker and the Captains by joining the:

  • 7th Birthday Challenge. Learn some of the Docker Captains’ favorite Tips + Tricks by completing 7 hands-on exercises. Earn a virtual badge for each exercise completed.
  • #myDockerBday Live Show. Celebrate Docker’s Birthday with a special 3-hour live show featuring exclusive conversations with the Docker team and Captains, open Q&A sessions, and prizes. To reserve a spot, sign up here.

First Docker GitHub Action is here!
http://www.qiwei365.com/blog/first-docker-github-action-is-here/ (Tue, 17 Mar 2020)

The post First Docker GitHub Action is here! appeared first on Docker Blog.

We are happy to announce that today Docker has released its first GitHub Action! We’ve been working with GitHub, looking into how developers have been using GitHub Actions with Docker to set up their CI/CD workflows. The standard flows you’ll see if you look around are what you’d expect: building an image, tagging it, logging into Hub, and pushing the image. This is the workflow we’ve aimed to support with our Docker build-push action.

Simplify CI/CD workflows

At Docker, much of our CI/CD workflow has traditionally been handled through Jenkins, using a variety of products to set it up and maintain it. For some things this is the best solution, like when we are testing Docker Desktop on a whole variety of different hosts and configurations. For others it’s a bit overkill. Like many, we at Docker have been looking at how we can leverage GitHub Actions to simplify our workflows, including how we use Docker itself.

GitHub Actions already leverages Docker in a lot of its workflows. Docker comes pre-installed and configured on the cloud runners, and first-class support for containerized actions lets developers easily use the same Docker workflows they use locally to configure their repo’s CI/CD. Combine this with multi-stage builds and you have a powerful environment to work with.

Docker actions

When we started with GitHub Actions there were no built-in actions to handle our main build, tag, and push flow, so we ended up with a YAML file full of bash commands that can’t yet be run locally. Indeed, that’s exactly what you’re given if you choose the Docker Publish workflow template (http://github.com/actions/starter-workflows/blob/master/ci/docker-publish.yml) from inside GitHub. Though it’s certainly doable, it’s not as easy to read and maintain as a script that just uses pre-built actions. This is likely why the community has already published a whole host of actions to do just that. Just go to the GitHub Marketplace and search for Docker actions.

Common things you’ll see beyond the standard build/tag/push are support for automatic tagging of images based on the branch you’re building from, logging in to private registries, and setting standard CLI arguments like the Dockerfile path.

Having looked at a number of these we decided to build our own actions off of these ideas and publish them back to the community as official Docker supported GitHub Actions. The first of these, docker/build-push-action, supports much of what has been written above and attempts to build and push images with what we consider to be best practices including:

  • Tagging based on the git ref (branches, tags, and PRs).
  • Tagging with the git SHA to make it easy to grab the image in later stages of more complex CI/CD flows, for example where you need to run end-to-end tests in a large, self-hosted cluster.
  • Labelling the image with Open Container Initiative labels using data pulled from the GitHub Actions environment.
  • Support for build time arguments and multi-stage targets.
  • Push filter that allows you to configure when just to build the image and when to actually push it, depending on any of the data supplied by GitHub Actions and your own scripts. See the examples for one we use ourselves.
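As a sketch of how these options come together, a workflow step using the action might look like the following (the repository name and secret names are placeholders, and the input names reflect v1 of the action as we understand it):

```yaml
# Sketch only: repository and secret names are placeholders.
- name: Build and push
  uses: docker/build-push-action@v1
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
    repository: myorg/myrepo
    tag_with_ref: true    # tag from the git ref (branch, tag, or PR)
    tag_with_sha: true    # also tag with the short git SHA
    add_git_labels: true  # OCI labels from the GitHub Actions environment
    push: ${{ startsWith(github.ref, 'refs/heads/master') }}  # push filter
```

Here the `push` expression acts as the push filter described above: the image is built on every ref but only pushed from master.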

A single action approach

But why one big action instead of many small ones? One thing that came up in our discussions with GitHub is how they envisaged that users would create many small actions and chain them together using inputs and outputs, but the reality looks to be the opposite. From what we have seen, users have been creating big actions and handling the flows internally, using inputs for configuration details.

Whilst developing our own actions we found ourselves going the same way, firstly because it’s simply easier to test that way, as there currently isn’t any way to run the workflow script locally.

Also this:

- name: build
  id: build
  uses: docker/build-action@v1
  with:
    repository: myorg/myrepo
    tags: v1
- name: login
  uses: docker/login-action@v1
  with:
    registry: myregistry
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
- name: push
  uses: docker/push-action@v1
  with:
    registry: myregistry
    tags: ${{ steps.build.outputs.tags }}

Is a bit more effort to write than:

- name: build-push
  uses: docker/build-push-action@v1
  with:
    username: ${{ secrets.DOCKER_USERNAME }}
    password: ${{ secrets.DOCKER_PASSWORD }}
    registry: myregistry
    repository: myorg/myrepo
    tags: v1

The final reason we went with the single action approach was that the logic of how the separate steps link and when they should be skipped is simple to handle in the backend based purely on a couple of inputs. Are the username and password set? Then do a login. Should we push? Then push with the tags that we built the image with. Is the registry set? Then log in to that registry, tag the images with that registry, and push to it rather than defaulting to Docker Hub.

Feedback is welcome!

All of this is handled by the image that backs the action. The backend is a simple Go program that shells out to the Docker CLI; the code can be found here and is built and pushed using the action itself. As always, feedback and contributions are welcome.

If you want to try out our Docker GitHub Action you can find it here, or if you haven’t used GitHub Actions before you can find a getting-started guide from GitHub here. For more news on what else to expect coming soon from Docker, remember to look at our public roadmap.

DockerCon LIVE with theCUBE: Call for Papers is Open
http://www.qiwei365.com/blog/dockercon-live-with-thecube-call-for-papers-is-open/ (Mon, 16 Mar 2020)

The post DockerCon LIVE with theCUBE: Call for Papers is Open appeared first on Docker Blog.


CFP Deadline: March 27th at 11:59 PM PST

The beauty of Docker is in the ways that developers are using it to positively impact their lives, industries, and day-to-day workflows. From sending rockets to space, to running some of the biggest apps on Earth, Docker helps developers build and share containerized apps – from the boring to the apps that change the world. DockerCon is the place where the community comes together to connect and share stories, best practices, and use cases. 

Back in December, we announced that DockerCon would not be a physical event but instead was evolving into a digital event. At the time, that decision was made in order to make attending DockerCon an option for any and all developers and members of the community. And now with the current state of the global COVID-19 pandemic, we are extra thankful to have already been planning for a virtual-only gathering. This change to DockerCon is the safest and healthiest option for our community, and we are excited to still bring everyone together to learn and share from one another.  

This year, DockerCon will be a virtual event on May 28th from 9am to 5pm GMT-8. We are looking forward to delivering a more earth- and budget-friendly conference while maintaining the compelling content and connections that DockerCon is known for. We think we have a format that delivers on that and uniquely taps the Docker community at large. But this certainly starts with you! Share your story by submitting a CFP to speak at DockerCon LIVE before the deadline of March 27th.

SUBMIT A TALK HERE!

What we are looking for

We are looking for submissions on the following topics:

  • Developer workflows (CI/CD)
  • Use Case: Setting up your local dev environment: Tips and Tricks
  • Use Case: How to onboard new developers – best practices 
  • Use Case: Containerizing your microservices
  • Use Case: Containerizing legacy apps
  • Use Case: Using Docker to deploy machine learning models
  • Unique use cases and cool apps
  • Technical deep dives 
  • Community Stories
  • Open Source

How DockerCon LIVE works

To allow for conversation and ensure a stress-free delivery for the speaker, session talks for DockerCon LIVE will be pre-recorded and played at a specific time during the conference. Speakers will be in chat with the audience during their session and be available to answer questions. The Docker team will help speakers prepare, record, and review their content. We are excited to try this format and hope that it creates a great experience for speakers (new and seasoned) and attendees alike.

First timer? Fantastic! 

Everyone has to start somewhere, and a virtual conference makes it a lot less intimidating to share your knowledge. If you aren’t sure what to talk about, think about an interesting problem you’ve solved or details of your day-to-day workflow, and ask a friend what they think you should talk about. Sometimes the best topics are things that a coworker finds interesting about your skills or role.

What’s in it for you?

  • Sharing is Caring: The opportunity to share your experience with the broader community in person and online
  • Speaker’s Swag
  • Support in preparing your talk
  • Community visibility to your expertise with Docker
  • A recording of your talk that will be posted online 

Don’t miss this great opportunity to tell your story

Submit a CFP (or two) before the March 27th deadline here: http://www.qiwei365.com/dockercon/cfp/

Docker Turns 7!
http://www.qiwei365.com/blog/docker-turns-7/ (Thu, 12 Mar 2020)

The post Docker Turns 7! appeared first on Docker Blog.


Since its introduction at PyCon in 2013, Docker has changed the way the world develops applications. And over the last 7 years, we’ve loved watching developers – new and seasoned – bring their ideas to life with Docker.

As is our tradition in the Docker community, we will be celebrating Docker’s birthday month with meetups (virtual + IRL), a special hands-on challenge, cake, and swag. Join us and celebrate your #myDockerBDay and the ways Docker and this community have impacted you – from the industry you work in, to an application you’ve built; from your day-to-day workflow to your career.

Learn more about the birthday celebrations below and share your #myDockerBday story with us on Twitter or submit it here for a chance to win some awesome Docker swag.

Docker Birthday LIVE Show on March 26th, 9am – 12pm GMT-8

Celebrate Docker’s Birthday with a special 3-hour live show featuring exclusive conversations with the Docker team and Captains, open Q&A sessions, and prizes. To reserve a spot, sign up here.

7th Birthday Challenge

Learn some of the Docker Captains’ favorite Tips + Tricks by completing 7 hands-on exercises. Earn a virtual badge for each exercise completed.

Learn Docker with Special Deals from the Docker Captains

This month we will be sharing exclusive discounts on Docker and Kubernetes learning materials from Docker Captains. Watch our blog and Twitter for updates – stay tuned!

Celebrate at a Local Meetup

To stay updated on the status of local birthday meetups, including when they may be rescheduled, go here and join the chapter of interest.

Helping You and Your Development Team Build and Ship Faster
http://www.qiwei365.com/blog/docker-strategy-helping-devs-build-and-ship-faster/ (Tue, 10 Mar 2020)

The post Helping You and Your Development Team Build and Ship Faster appeared first on Docker Blog.

I remember the first time one of my co-workers told me about Docker. There is a longer story behind it, but it ended with “it was so easy and saved me so much time.” That compelled me to install Docker and try it for myself. Yup, she was right. Easy, simple, efficient. Sometime later, at a conference, while catching up with some friends who are developers, I asked them “how are things going?” The conversation eventually led to the topic of where things are going in the container space. I asked, “what’s the biggest issue you are having right now?” I expected the response to be something Kubernetes related. I was surprised the answer was “managing all the tech that gets my code deployed and running.”

The above sentiment is echoed by our CEO, Scott Johnston, in this post. Millions of you use Docker today (check out the Docker Index for the latest usage stats), and we are so very thankful for the vibrant Docker community. We heard from you that easily going from code to cloud is a problem, and Scott outlined the complexities. There are many choices across the inner loop, packaging, registry, CI, security, CD, and public cloud runtimes. Those choices exist at almost every step, and once you make those choices, you have to stitch them together and manage them yourself. Things are a little easier if you are “all-in” on a particular public cloud provider.

However, what if you are a developer in a small team at a startup, and need something easy, fast, and efficient? Or, if you are a developer who is part of a team in a large organization that uses multiple clouds? Not so straightforward. 

This is where Docker will be spending our effort to help. Building on the foundational Docker tools, Docker Desktop and Docker Hub, to help you, the developer, get your work from SCM to public cloud runtime in the easiest, most efficient, and cloud-agnostic way. 

How are we going to do this? By focusing on developer experience through Docker Desktop, partnering with the ecosystem, and making Docker Hub the nexus for all the integrations, configuration, and management of the application components which constitute your apps and microservices. 

First, we will be expanding on the tooling and experiences in Docker Desktop to (a) accelerate the onboarding of new developers to development team processes and workflow, (b) help new developers onboard to developing with containers, and (c) provide features that help improve team collaboration and communication.

We believe a key way to help here is providing more features for the Docker CLI and Docker Desktop UI delivered from Docker Hub. We want to help you accomplish as much as possible in your local development environment without having to jump around interfaces. We also want you to be able to access and interact with services upstream (registry, CI, deployment to runtime) without having to leave the CLI. More to come here.

In addition, we will expand Docker Hub to help you manage all the application components you generate as part of development and deployment. Containers, serverless functions, <insert YAML here>, and all the lineage and metadata which these components generate. Docker Hub will be more than just a registry.

Speaking of “more than just a registry,” we will make Docker Hub the central point for the ecosystem of tools to partner with us in delivering you a great experience. Docker Hub will provide a range of pipeline options, from highly abstracted, opinionated options to ones you construct and stitch together yourself. We’ve already begun talking with some great partners in the industry and are excited to bring to you what we’ve been thinking here. The overall goal is to provide you solutions that match your level of maturity or desired level of abstraction, all in a multi-cloud and vendor-neutral way.

Across all of the above, open source will be at the center. Compose, Engine, and Notary will continue to be big contributors to our products, especially Docker Desktop. We will continue to build on these projects with the community, and you will see us contributing to other projects as well. 

We will deliver all of this through a monthly SaaS subscription model. We want you to be able to consume on your terms. 

Finally, we very much want your participation in how we think about helping you deliver the best products to your customers. Today, for the first time at Docker, we are launching a public roadmap. You can find it here. We invite you to participate by adding new feature ideas in the issues, up-voting other feature ideas you think are great (and down-voting ones you think are not), and helping us with prioritization. We are here for you and want to make sure we are as transparent as possible, while constantly listening to your feedback. 

We look forward to working with you to help Docker help you and your customers. If you would like to engage with us, please do so!

  • I’ll be doing an AMA about this strategy during our #myDockerBday Live Show on March 26, 2020. RSVP with your Docker ID here or on meetup.com here.
  • I’ll be speaking at the Docker Las Vegas Meetup on March 19th, 2020. Sign up here. 
  • Save the date for our virtual conference DockerCon Live on May 28, 2020. Sign up for updates here.
  • Find me on GitHub through our public roadmap!

Thank you! Onward. 

Helping Developers Simplify Apps, Toolchains, and Open Source
http://www.qiwei365.com/blog/helping-devs-simplify-apps-toolchains-and-open-source/ (Mon, 09 Mar 2020)

The post Helping Developers Simplify Apps, Toolchains, and Open Source appeared first on Docker Blog.


It’s been an exciting four months since we announced that Docker is refocusing on developers. We have spent much of that time listening to you, our developer community, in meetups, on GitHub, through social media, with our Docker Captains, and in face-to-face one-on-ones. Your support and feedback on our refocused direction have been helpful and positive, and we’re fired-up for the year ahead!

What’s driving our enthusiasm for making developers successful? Quite simply, it’s in recognition of the enormous impact your creativity – manifested in the applications you ship – has on all of our lives. Widespread adoption of smartphones and near-pervasive Internet connectivity only accelerates consumer demand for new applications. And businesses recognize that applications are key to engaging their customers, partnering effectively with their supply chain ecosystem, and empowering their employees.

As a result, the demand for developers has never been higher. The current worldwide population of 18 million developers is growing approximately 20% every year (in contrast to the 0.6% annual growth of the overall US labor force). Yet, despite this torrid growth, demand for developers in 2020 will outstrip supply by an estimated 1 million. Thus, we see tremendous opportunities in helping every developer to be even more creative and productive as quickly as possible.

But how best to super-charge developer creativity and productivity? More than half of our employees at Docker are developers, and they, our Docker Captains, and our developer community collectively say that reducing complexity is key. In particular, there is an opportunity to reduce complexity stemming from three potential sources:

Applications. Developers want to ship their ideas from code to cloud as quickly as possible. But, while cloud-native microservices-based apps offer many compelling benefits, these can come at the cost of complexity. Orders of magnitude more app components, multiple languages, multiple service implementations – Containers? Serverless functions? Cloud-hosted services? – and more risk increasing the cognitive load on development teams.

Toolchains. In shipping code to cloud, developers want the freedom to select their own tools for each stage of their app delivery toolchains, and there is a rich breadth and depth of innovative products from which to select. But integrating multiple point products across the toolchain stages of source code management, build/CI, deployment, and others can be challenging. Often, it results in custom, one-off scripts that subsequently need to be maintained, lossy hand-off of app state between delivery stages, and subpar developer experiences.

Open Source. No surprise to the Docker community, an increasing number of developers are attracted by the creativity and velocity of innovation in open source technologies. But development teams often struggle with how to integrate and get the most out of open source components in their apps, how to manage the lifecycle of open source updates and patches, and how to navigate open source licensing dos and don’ts.

And for all the complexities above, development teams are seeking code-to-cloud solutions that won’t slow them down or lock them into any specific tool or runtime environment.

At Docker, we view our mission as helping developers bring their ideas to life by conquering the complexities of application development. In conquering these complexities, we believe that developers shouldn’t have to trade off freedom of choice for simplicity, agility, or portability.

We are fortunate that today there are millions of developers already using Docker Desktop and Docker Hub – rated the “Second Most Loved Platform” in Stack Overflow’s 2019 survey – to conquer the complexity of building, sharing, and running cloud-native microservices-based applications. In 2020 we will help development teams further reduce complexity so they can ship creative applications even faster. How? Stay tuned for more this week!

Docker Desktop for Windows Home is here!
http://www.qiwei365.com/blog/docker-desktop-for-windows-home-is-here/ (Thu, 05 Mar 2020)

The post Docker Desktop for Windows Home is here! appeared first on Docker Blog.

Last year we announced that Docker had released a preview of Docker Desktop with WSL 2 integration. We are now pleased to announce that we have completed the work to enable experimental support for Windows Home WSL 2 integration. This means that Windows Insider users on build 19040 or higher can now install and use Docker Desktop!

Feedback on this first version of Docker Desktop for Windows Home is welcome! To get started, you will need to be on Windows Insider Preview build 19040 or higher and install Docker Desktop Edge 2.2.2.0.

What’s in Docker Desktop for Windows Home?

Docker Desktop for WSL 2 Windows Home is a full version of Docker Desktop for Linux container development. It comes with the same feature set as our existing Docker Desktop WSL 2 backend. This gives you: 

  • Latest version of Docker on your Windows machine 
  • Install Kubernetes in one click on Windows Home 
  • Integrated UI to view/manage your running containers 
  • Start Docker Desktop in <5 seconds
  • Use Linux Workspaces
  • Dynamic resource/memory allocation 
  • Networking stack, support for http proxy settings, and trusted CA synchronization 

How do I get started developing with Docker Desktop? 

For the best experience of developing with Docker and WSL 2, we suggest having your code inside a Linux distribution. This improves file system performance, and thanks to products like VSCode you can still do all of your work inside the Windows UI, in an IDE you know and love.

First, make sure you are on the Windows Insider program, on build 19040 or higher, and have installed Docker Desktop Edge.

Next, install a WSL distribution of Linux (for this example I will assume something like Ubuntu from the Microsoft Store).

You may want to check that your distro is set to WSL 2. To check, run the following in PowerShell:

wsl -l -v 

If you see that your distro is version 1, you will need to run:

wsl --set-version DistroName 2

Once you have a V2 WSL distro, Docker Desktop will automatically set this up with Docker.
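Putting the steps above together, the setup from PowerShell might look like this sketch (“Ubuntu” is an example distro name):

```shell
# Run from PowerShell; "Ubuntu" is an example distro name.
wsl -l -v                      # list installed distros and their WSL versions
wsl --set-version Ubuntu 2     # convert a version 1 distro to WSL 2
wsl --set-default-version 2    # optionally make WSL 2 the default for new distros
```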

The next step is to start working with your code inside this Ubuntu distro and ideally with your IDE still in Windows. In VSCode this is pretty straightforward.

You will want to open up VSCode and install the Remote WSL extension, which will allow you to work with a remote server in the Linux distro while your IDE client stays on Windows.

Now we need to get started working in VSCode remotely. The easiest way to do this is to open up your terminal and type:

wsl
code .

This will open a new VSCode window connected remotely to your default distro, which you can check in the bottom corner of the screen.

(Or you can just look for Ubuntu in your start menu, open it, and then run code . )

Once in VSCode, I use the terminal to pull my code and start working natively in Linux with Docker from my Windows Home machine!

Other tips and tricks:

Your feedback needed!

We are excited to get your feedback on the first version of Docker Desktop for Windows Home and for you to tell us how we can make it even better.

To get started with WSL 2 Docker Desktop on Windows Home today, you will need to be on Windows Insider Preview build 19040 or higher and install Docker Desktop Edge 2.2.2.0.

How to deploy on remote Docker hosts with docker-compose
http://www.qiwei365.com/blog/how-to-deploy-on-remote-docker-hosts-with-qiwei365.compose/ (Mon, 02 Mar 2020)


The docker-compose tool is pretty popular for running dockerized applications in a local development environment. All we need to do is write a Compose file containing the configuration for the application’s services and have a running Docker engine for deployment. From here, we can get the application running locally in a few seconds with a single `docker-compose up` command.

This was the initial scope, but…

As developers look for the same ease of deployment in CI pipelines and production environments as in their development environment, we find docker-compose being used in ways that go beyond its initial scope. In such cases, the challenge is that docker-compose only supported running against remote Docker engines through the DOCKER_HOST environment variable and the -H, --host command line option. This is not very user friendly, and managing deployments of Compose applications across multiple environments becomes a burden.

To address this issue, we rely on Docker Contexts to securely deploy Compose applications across different environments and manage them effortlessly from our localhost. The goal of this post is to show how to use contexts to target different environments for deployment and easily switch between them.

We’ll start by defining a sample application to use throughout this exercise, then we’ll show how to deploy it on the localhost. Next we’ll have a look at a Docker Context and the information it holds to allow us to safely connect to remote Docker engines. Finally, we will exercise the use of Docker Contexts with docker-compose to deploy on remote engines.

Before proceeding, docker and docker-compose must be installed on the localhost. Docker Engine and Compose are included in Docker Desktop for Windows and macOS. For Linux you will need to get Docker Engine and docker-compose. Make sure you get docker-compose with the context support feature; this is available starting with release 1.26.0-rc2 of docker-compose.

Sample Compose application

Let’s define a Compose file describing an application consisting of two services: frontend and backend. The frontend service will run an nginx proxy that will forward the HTTP requests to a simple Go app server.

A sample with all the necessary files for this exercise can be downloaded from here, or any other sample from the Compose samples repository can be used instead.

The project structure and the Compose file can be found below:

$ tree hello-docker
hello-docker
├── backend
│   ├── Dockerfile
│   └── main.go
├── docker-compose.yml
└── frontend
    ├── Dockerfile
    └── nginx.conf

docker-compose.yml

version: "3.6"
services:
  frontend:
    build: frontend
    ports:
      - 8080:80
    depends_on:
      - backend
  backend:
    build: backend
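For completeness, the frontend’s nginx.conf would contain a proxy rule along these lines. This is only a sketch: the “backend” hostname comes from the service name in the Compose file above, but the Go server’s listening port (8080 here) is an assumption, not taken from the sample.

```nginx
server {
    listen 80;

    location / {
        # "backend" resolves through Docker's built-in DNS to the backend service;
        # the upstream port is an assumption for illustration.
        proxy_pass http://backend:8080;
    }
}
```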

Running on localhost

To deploy the application we defined previously, go to the project directory and run docker-compose:

$ cd hello-docker/
$ docker-compose up -d
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done

Check that all containers are running and that port 80 of the frontend service container is mapped to port 8080 of the localhost, as described in the docker-compose.yml.

$ docker ps
CONTAINER ID  IMAGE                  COMMAND                 CREATED        STATUS
  PORTS                   NAMES
07b55d101e74  nginx:latest           "nginx -g 'daemon of…"  6 seconds ago  Up 5 seconds
  0.0.0.0:8080->80/tcp    hello-docker_frontend_1
48cdf1b8417c  hello-docker_backend   "/usr/local/bin/back…"  6 seconds ago  Up 5 seconds                           hello-docker_backend_1

Query the web service on port 8080 to get the hello message from the Go backend.

$ curl localhost:8080
          ##         .
    ## ## ##        ==
## ## ## ## ##     ===
/"""""""""""""""""\___/ ===
{                       / ===-
\______ O           __/
 \    \         __/
  \____\_______/
Hello from Docker!

Running on a remote host

A remote Docker host is a machine, inside or outside our local network, which is running a Docker Engine and has ports exposed for querying the Engine API.

The sample application can be deployed on a remote host in several ways. Assume we have SSH access to a remote Docker host with key-based authentication, to avoid a password prompt when deploying the application.
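If key-based authentication isn’t set up yet, it only takes a minute. A minimal sketch (the key path is a temporary example, and user@remotehost is a placeholder):

```shell
# Generate a dedicated key pair for deployments (no passphrase here, for brevity).
keyfile="$(mktemp -d)/docker-deploy"
ssh-keygen -t ed25519 -f "$keyfile" -N "" -q

# Print the public key; append it to ~/.ssh/authorized_keys on the remote host,
# or use ssh-copy-id (commented out because "remotehost" is a placeholder):
cat "${keyfile}.pub"
# ssh-copy-id -i "${keyfile}.pub" user@remotehost
```

After the public key is installed on the remote host, an `ssh user@remotehost` should log in without prompting for a password.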

There are three ways to deploy it on the remote host:

1. Manual deployment: copying the project files, installing docker-compose, and running it

A common usage of Compose is to copy the project source along with the docker-compose.yml, install docker-compose on the target machine where we want to deploy the Compose app, and finally run it.

$ scp -r hello-docker user@remotehost:/path/to/src
$ ssh user@remotehost
$ pip install docker-compose
$ cd /path/to/src/hello-docker
$ docker-compose up -d

The disadvantage in this case is that for any change in the application sources or the Compose file, we have to copy everything again, connect to the remote host, and re-run.

2. Using DOCKER_HOST environment variable to set up the target engine

Throughout this exercise we use the DOCKER_HOST environment variable to target Docker hosts, but the same can be achieved by passing the -H, --host argument to docker-compose.

$ cd hello-docker
$ DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

This is a better approach than manual deployment, but it quickly gets annoying, as it requires setting or exporting the remote host endpoint on every application change or host change.
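One way to tame this is a small wrapper that maps an environment name to its endpoint. This is a hypothetical helper, not part of docker-compose: the host URLs are placeholders, and the function echoes the command instead of executing it so you can see exactly what would run.

```shell
# Hypothetical helper: resolve an environment name to a DOCKER_HOST endpoint.
deploy() {
  case "$1" in
    staging) host="ssh://user@staging-host" ;;
    prod)    host="ssh://user@prod-host" ;;
    *)       echo "unknown environment: $1" >&2; return 1 ;;
  esac
  # Echo for illustration; drop the echo to actually deploy.
  echo "DOCKER_HOST=$host docker-compose up -d"
}

deploy staging
```

Contexts, described next, solve the same problem natively and are the better option when available.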

3. Using docker contexts 

$ docker context ls
NAME     DESCRIPTION   DOCKER ENDPOINT            KUBERNETES ENDPOINT   ORCHESTRATOR
remote                 ssh://user@remotemachine
$ cd hello-docker
$ docker-compose --context remote up -d

Docker Contexts are an efficient way to automatically switch between different deployment targets. We will discuss contexts in the next section to understand how Docker Contexts can be used with Compose to ease and speed up deployments.

Docker Contexts

A Docker Context is a mechanism to provide names for Docker API endpoints and store that information for later usage. Docker Contexts can be easily managed with the Docker CLI, as shown in the documentation.

Create and use context to target remote host

To access the remote host in an easier way with the Docker client, we first create a context that will hold the connection path to it.

$ docker context create remote --docker "host=ssh://user@remotemachine"
remote
Successfully created context "remote"
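To double-check what the context stores, docker context inspect prints its configuration. Abridged, it looks roughly like this; the exact fields may vary by Docker version:

```json
[
    {
        "Name": "remote",
        "Metadata": {},
        "Endpoints": {
            "docker": {
                "Host": "ssh://user@remotemachine",
                "SkipTLSVerify": false
            }
        }
    }
]
```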

$ docker context ls
NAME      DESCRIPTION            DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default * Current DOCKER_HOST…   unix:///var/run/docker.sock                         swarm
remote                           ssh://user@remotemachine

Make sure key-based authentication is set up for SSH-ing to the remote host. Once this is done, we can list containers on the remote host by passing the context name as an argument.

$ docker --context remote ps
CONTAINER ID    IMAGE   COMMAND   CREATED   STATUS   NAMES

We can also set the “remote” context as the default context for our docker commands. This will allow us to run all subsequent docker commands directly on the remote host without passing the context argument on each command.

$ docker context use remote
remote
Current context is now "remote"
$ docker context ls
NAME      DESCRIPTION            DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default   Current DOCKER_HOST…   unix:///var/run/docker.sock                         swarm
remote *                         ssh://user@remotemachine

docker-compose context usage

The latest release of docker-compose now supports the use of contexts for accessing Docker API endpoints. This means we can run docker-compose and specify the context “remote” to automatically target the remote host. If no context is specified, docker-compose will use the current context, just like the Docker CLI.

$ docker-compose --context remote up -d
/tmp/_MEI4HXgSK/paramiko/client.py:837: UserWarning: Unknown ssh-ed25519 host key for 10.0.0.52: b'047f5071513cab8c00d7944ef9d5d1fd'
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done

$ docker --context remote ps
CONTAINER ID   IMAGE                  COMMAND                 CREATED
  STATUS          PORTS                  NAMES
ddbb380635aa   hello-docker_frontend  "nginx -g 'daemon of…"  24 seconds ago
  Up 23 seconds   0.0.0.0:8080->80/tcp   hello-docker_web_1
872c6a55316f   hello-docker_backend   "/usr/local/bin/back…"  25 seconds ago
  Up 24 seconds                          hello-docker_backend_1

Compose deployments across multiple targets

Many developers may have several development/test environments that they need to switch between. Deployment across all of these is now effortless with the use of contexts in docker-compose.

Now let’s exercise context switching between several Docker engines. For this, we define three targets:

  • Localhost running a local Docker engine 
  • A remote host accessible through ssh
  • A Docker-in-Docker container acting as another remote host 

The table below shows the mapping of contexts to Docker targets:

Target environment   Context name   API endpoint
localhost            default        unix:///var/run/docker.sock
Remote host          remote         ssh://user@remotemachine
docker-in-docker     dind           tcp://127.0.0.1:2375

To run a Docker-in-Docker container with port 2375 mapped to localhost, run:

$ docker run --rm -d -p "2375:2375" --privileged -e "DOCKER_TLS_CERTDIR=" --name dind docker:19.03.3-dind
ed92bc991bade2d41cab08b8c070c70b788d8ecf9dffc89e8c6379187aed9cdc
$ docker ps
CONTAINER ID   IMAGE                COMMAND                 CREATED         STATUS
  PORTS                                 NAMES
ed92bc991bad   docker:19.03.3-dind  "dockerd-entrypoint.…"  17 seconds ago  Up 15 seconds
  0.0.0.0:2375->2375/tcp, 2376/tcp      dind

Create a new context ‘dind’ to easily target the container:

$ docker context create dind --docker "host=tcp://127.0.0.1:2375" --default-stack-orchestrator swarm
dind
Successfully created context "dind"

$ docker context ls
NAME       DESCRIPTION            DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *  Current DOCKER_HOST…   unix:///var/run/docker.sock                         swarm
dind                              tcp://127.0.0.1:2375                                swarm
remote                            ssh://user@devmachine                               swarm

We can now target any of the environments to deploy the Compose application from the localhost.

$ docker context use dind
dind
Current context is now "dind"

$ docker-compose up -d
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done

$ docker ps
CONTAINER ID   IMAGE                  COMMAND                 CREATED
  STATUS          PORTS                  NAMES
951784341a0d   hello-docker_frontend  "nginx -g 'daemon of…"  34 seconds ago
  Up 33 seconds   0.0.0.0:8080->80/tcp   hello-docker_frontend_1
872c6a55316f   hello-docker_backend   "/usr/local/bin/back…"  35 seconds ago
  Up 33 seconds                          hello-docker_backend_1

$ docker --context default ps
CONTAINER ID   IMAGE                 COMMAND                    CREATED
    STATUS          PORTS                              NAMES
ed92bc991bad   docker:19.03.3-dind   "dockerd-entrypoint.…"    28 minutes ago
    Up 28 minutes   0.0.0.0:2375->2375/tcp, 2376/tcp   dind

$ docker-compose --context remote up -d
/tmp/_MEIb4sAgX/paramiko/client.py:837: UserWarning: Unknown ssh-ed25519 host key for 10.0.0.52: b'047f5071513cab8c00d7944ef9d5d1fd'
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done

$ docker context use default
default
Current context is now "default"

$ docker-compose up -d
Creating network "hello-docker_default" with the default driver
Creating hello-docker_backend_1  ... done
Creating hello-docker_frontend_1 ... done

$ docker ps
CONTAINER ID   IMAGE                  COMMAND                 CREATED
  STATUS              PORTS                              NAMES
077b5e5b72e8   hello-docker_frontend  "nginx -g 'daemon of…"  About a minute ago
  Up About a minute   0.0.0.0:8080->80/tcp               hello-docker_frontend_1
fc01878ad14e   hello-docker_backend   "/usr/local/bin/back…"  About a minute ago
  Up About a minute                                      hello-docker_backend_1
ed92bc991bad   docker:19.03.3-dind    "dockerd-entrypoint.…"  34 minutes ago
  Up 34 minutes       0.0.0.0:2375->2375/tcp, 2376/tcp   dind

The sample application now runs on all three hosts. Querying the frontend service on each of these hosts as shown below should return the same message:

$ curl localhost:8080

$ docker exec -it dind sh -c "wget -O - localhost:8080"

$ curl 10.0.0.52:8080

Output:

          ##         .
    ## ## ##        ==
## ## ## ## ##     ===
/"""""""""""""""""\___/ ===
{                       / ===-
\______ O           __/
 \    \         __/
  \____\_______/
Hello from Docker!

Summary

Deploying to remote hosts with docker-compose has been a common use-case for quite some time.

The Docker Context support in docker-compose offers an easy and elegant approach to targeting different remote hosts. Switching between different environments is now easy to manage, and deployment risk across them is reduced. We have shown examples of how to access remote Docker hosts via the SSH and TCP protocols, which we hope cover a large number of use-cases.

The post How to deploy on remote Docker hosts with docker-compose appeared first on Docker Blog.

Getting Started with Istio Using Docker Desktop (http://www.qiwei365.com/blog/getting-started-with-istio-using-docker-desktop/, Tue, 18 Feb 2020)

This is a guest post from Docker Captain Elton Stoneman, a Docker alumnus who is now a freelance consultant and trainer, helping organizations at all stages of their container journey. Elton is the author of the book Learn Docker in a Month of Lunches, and numerous Pluralsight video training courses – including Managing Apps on Kubernetes with Istio and Monitoring Containerized Application Health with Docker.

Istio is a service mesh – a software component that runs in containers alongside your application containers and takes control of the network traffic between components. It’s a powerful architecture that lets you manage the communication between components independently of the components themselves. That’s useful because it simplifies the code and configuration in your app, removing all network-level infrastructure concerns like routing, load-balancing, authorization and monitoring – which all become centrally managed in Istio.

There’s a lot of good material for digging into Istio. My fellow Docker Captain Lee Calcote is the co-author of Istio: Up and Running, and I’ve just published my own Pluralsight course Managing Apps on Kubernetes with Istio. But it can be a difficult technology to get started with because you really need a solid background in Kubernetes before you get too far. In this post, I’ll try and keep it simple. I’ll focus on three scenarios that Istio enables, and all you need to follow along is Docker Desktop.

Setup

Docker Desktop gives you a full Kubernetes environment on your laptop. Just install the Mac or Windows version – be sure to switch to Linux containers if you’re using Windows – then open the settings from the Docker whale icon, and select Enable Kubernetes in the Kubernetes section. You’ll also need to increase the amount of memory Docker can use, because Istio and the demo app use a fair bit – in the Resources section increase the memory slider to at least 6GB.

Now grab the sample code for this blog post, which is in my GitHub repo:

git clone http://github.com/sixeyed/istio-samples.git
cd istio-samples

The repo has a set of Kubernetes manifests that will deploy Istio and the demo app, which is a simple bookstore website (this is the Istio team’s demo app, but I use it in different ways so be sure to use my repo to follow along). Deploy everything using the Kubernetes control tool kubectl, which is installed as part of Docker Desktop:

kubectl apply -f ./setup/

You’ll see dozens of lines of output as Kubernetes creates all the Istio components along with the demo app – which will all be running in Docker containers. It will take a few minutes for all the images to download from Docker Hub, and you can check the status using kubectl:

# Istio - will have "1/1" in the "READY" column when fully running:
kubectl get deploy -n istio-system
# demo app - will have "2/2" in the "READY" column when fully running:
kubectl get pods
kubectl get pods

When all the bits are ready, browse to http://localhost/productpage and you’ll see this very simple demo app:

And you’re good to go. If you’re happy working with Kubernetes YAML files you can look at the deployment spec for the demo app, and you’ll see it’s all standard Kubernetes resources – services, service accounts and deployments. Istio is managing the communication for the app, but we haven’t deployed any Istio configurations, so it isn’t doing much yet.

The demo application is a distributed app. The homepage runs in one container and it consumes data from REST APIs running in other containers. The book details and book reviews you see on the page are fetched from other containers. Istio is managing the network traffic between those components, and it’s also managing the external traffic which comes into Kubernetes and on to the homepage.

We’ll use this demo app to explore the main features of Istio: traffic management, security and observability.

Managing Traffic – Canary Deployments with Istio

The homepage is kinda boring, so let’s liven it up with a new release. We want to do a staged release so we can check out how the update gets received, and Istio supports both blue-green and canary deployments. Canary deployments are generally more useful and that’s what we’ll use. We’ll have two versions of the home page running, and Istio will send a proportion of the traffic to version 1 and the remainder to version 2:

We’re using Istio for service discovery and routing here: all incoming traffic comes into Istio and we’re going to set rules for how it forwards that traffic to the product page component. We do that by deploying a VirtualService, which is a custom Istio resource. That contains this routing rule for HTTP traffic:

gateways:
    - bookinfo-gateway
  http:
    - route:
        - destination:
            host: productpage
            subset: v1
            port:
              number: 9080
          weight: 70
        - destination:
            host: productpage
            subset: v2
            port:
              number: 9080
          weight: 30

There are a few moving pieces here:

  • The gateway is the Istio component which receives external traffic. The bookinfo-gateway object is configured to listen to all HTTP traffic, but gateways can be restricted to specific ports and host names;
  • The destination is the actual target where traffic will be routed (which can be different from the requested domain name). In this case, there are two subsets, v1 which will receive 70% of traffic and v2 which receives 30%;
  • Those subsets are defined in a DestinationRule object, which uses Kubernetes labels to identify pods within a service. In this case the v1 subset finds pods with the label version=v1, and the v2 subset finds pods with the label version=v2.

Sounds complicated, but all it’s really doing is defining the rules to shift traffic between different pods. Those definitions come in Kubernetes manifest YAML files, which you deploy in the same way as your applications. So we can do our canary deployment of version 2 with a single command – this creates the new v2 pod, together with the Istio routing rules:

# deploy:
kubectl apply -f ./canary-deployment

# check the deployment - it's good when all pods show "2/2" in "READY":
kubectl get pods

Now if you refresh the bookstore demo app a few times, you’ll see that most of the responses are the same boring v1 page, but a lucky few times you’ll see the v2 page which is the result of much user experience testing:

As the positive feedback rolls in you can increase the traffic to v2 just by altering the weightings in the VirtualService definition and redeploying. Both versions of your app are running throughout the canary stage, so when you shift traffic you’re sending it to components that are already up and ready to handle traffic, so there won’t be additional latency from new pods starting up.

Canary deployments are just one aspect of traffic management which Istio makes simple. You can do much more, including adding fault tolerance with retries and circuit breakers, all with Istio components and without any changes to your apps.

Securing Traffic – Authentication and Authorization with mTLS

Istio handles all the network traffic between your components transparently, without the components themselves knowing that it’s interfering. It does this by running all the application container traffic through a network proxy, which applies Istio’s rules. We’ve seen how you can use that for traffic management, and it works for security too.

If you need encryption in transit between app components, and you want to enforce access rules so only certain consumers can call services, Istio can do that for you too. You can keep your application code and config simple, use basic unauthenticated HTTP and then apply security at the network level.

Authentication and authorization are security features of Istio which are much easier to use than they are to explain. Here’s the diagram of how the pieces fit together:

Here the product page component on the left is consuming a REST API from the reviews component on the right. Those components run in Kubernetes pods, and you can see each pod has one Docker container for the application and a second Docker container running the Istio proxy, which handles the network traffic for the app.

This setup uses mutual-TLS for encrypting the HTTP traffic and authenticating and authorizing the caller:

  • The authentication Policy object applied to the service requires mutual TLS, which means the service proxy listens on port 443 for HTTPS traffic, even though the service itself is only configured to listen on port 80 for HTTP traffic;
  • The AuthorizationPolicy object applied to the service specifies which other components are allowed access. In this case, everything is denied access, except the product page component which is allowed HTTP GET access;
  • The DestinationRule object is configured for mutual-TLS, which means the proxy for the product page component will upgrade HTTP calls to HTTPS, so when the app calls the reviews component it will be a mutual-TLS conversation.

Mutual-TLS means the client presents a certificate to identify itself, as well as the service presenting a certificate for encryption (only the server cert is standard HTTPS behavior). Istio can generate and manage all those certs, which removes a huge burden from normal mTLS deployments. 

There’s a lot to take in there, but deploying and managing all of it is super simple; it’s just the same kubectl process:

kubectl apply -f ./service-authorization/

Istio uses the Kubernetes Service Account for identification, and you’ll see when you try the app that nothing’s changed; it all works as before. The difference is that no other components running in the cluster can access the reviews component now: the API is locked down so only the product page can consume it.

You can verify that by connecting to another container – the details component is running in the same cluster. Try to consume the reviews API from the details container:

docker container exec -it $(docker container ls --filter name=k8s_details --format '{{ .ID}}') sh

curl http://reviews:9080/1

You’ll see an error, RBAC: access denied, which is Istio enforcing the authorization policy. This is powerful stuff, especially having Istio manage the certs for you. It generates certs with a short lifespan, so even if they do get compromised they’re not usable for long. And all this comes without complicating your app code or dealing with self-signed certs.

Observability – Visualising the Service Mesh with Kiali

All network traffic runs through Istio, which means it can monitor and record all the communication. Istio uses a pluggable architecture for storing telemetry, which has support for standard systems like Prometheus and Elasticsearch. 

Collecting and storing telemetry for every network call can be expensive, so this is all configurable. The deployment of Istio we’re using is the demo configuration, which has telemetry configured so we can try it out. Telemetry data is sent from the service proxies to the Istio component called Mixer, which can send it out to different back-end stores, in this case, Prometheus:

(This diagram is a simplification – Prometheus actually pulls the data from Istio, and you can use a single Prometheus instance to collect metrics from Istio and your applications).

The data in Prometheus includes response codes and durations, and Istio comes with a bunch of Grafana dashboards you can use to drill down into the metrics. And it also has support for a great tool called Kiali, which gives you a very useful visualization of all your services and the network traffic between them.

Kiali is already running in the demo deployment, but it’s not published by default. You can gain access by deploying a Gateway and a VirtualService:

kubectl apply -f ./visualization-kiali/

Now refresh the app a few times at http://localhost/productpage and then check out the service mesh visualization in Kiali at http://localhost:15029. Log in with the username admin and password admin, then browse to the Graph view and you’ll see the live traffic for the bookstore app:

I’ve turned on “requests percentage” for the labels here, and I can see the traffic split between my product page versions is 67% to 34%, which is pretty close to my 70-30 weighting (the more traffic you have, the closer you’ll get to the specified weightings).

Kiali is just one of the observability tools Istio supports. The demo deployment also runs Grafana with multiple dashboards and Jaeger for distributed tracing – which is a very powerful tool for diagnosing issues with latency in distributed applications. All the data to power those visualizations is collected automatically by Istio.

Wrap-Up

A service mesh makes the communication layer for your application into a separate entity, which you can control centrally and independently from the app itself. Istio is the most fully-featured service mesh available now, although there is also Linkerd (which tends to have better baseline performance), and the Service Mesh Interface project (which aims to standardise mesh features). 

Using a service mesh comes with a cost – there are runtime costs for hosting additional compute for the proxies and organizational costs for getting teams skilled in Istio. But the scenarios it enables will outweigh the cost for a lot of people, and you can very quickly test if Istio is for you, using it with your own apps in Docker Desktop.

The post Getting Started with Istio Using Docker Desktop appeared first on Docker Blog.

Docker Donates the cnab-to-oci Library to cnab.io (http://www.qiwei365.com/blog/docker-donates-cnab-to-oci-library/, Wed, 12 Feb 2020)

The post Docker Donates the cnab-to-oci Library to cnab.io appeared first on Docker Blog.


Docker is proud and happy to announce the donation of our cnab-to-oci library to the CNAB project 🎉. This project was created last year after Microsoft and Docker moved the CNAB specification to the Linux Foundation’s Joint Development Foundation. At that time, the CNAB specification repository was moved from the deislab GitHub organization to the new cnabio organization. The reference implementations, cnab-go (the Golang library implementation of the specification) and duffle (the CLI reference implementation), were also moved.

What is cnab-to-oci for?

Docker helped with the development of the CNAB specification and its reference implementations, and led the work on the cnab-to-oci library for sharing a CNAB bundle using an existing container registry. This library is now used by three CNAB tools (Docker App, Porter and duffle), as well as Docker Hub. It successfully demonstrated how to push, pull and share a CNAB bundle using a registry. This work will be used as a foundation for the future CNAB Registries specification.

The transfer is already in effect, so starting now please refer to github.com/cnabio/cnab-to-oci in your Golang imports.

How does cnab-to-oci store a CNAB bundle into a registry?

As you may know, the OCI image specification introduces two main objects: the OCI Manifest and the OCI Image Index. The first one is well known and represents the classic Docker image. The other one was, at first, used to store multi-architecture images (see nginx as an example).

But what you may not know is that the specification doesn’t restrict the use of OCI Indexes to multi-arch images. You can store almost anything you want, as long as you meet the specification, and it is quite open.

cnab-to-oci uses this openness to push not only the bundle.json, but also the invocation image and the component images (or service images for a Docker App). It pushes everything to the same repository, which guarantees that when someone pulls a bundle, all of its components can be pulled as well.

Demo Time

While cnab-to-oci is implemented as a library that can be used by other tools, the repository contains a handy CLI tool that can perform push and pull of any CNAB bundle.json.

With the following command we push a bundle example to a Docker Hub repository. It pushes all the manifests found in the bundle, then creates an OCI Index and pushes it at the end. The digest we get as a result points to the OCI Index of the bundle.

$ make bin/cnab-to-oci

$ ./bin/cnab-to-oci push examples/helloworld-cnab/bundle.json -t hubusername/repo:demo --log-level=debug --auto-update-bundle

DEBU[0000] Fixing up bundle docker.io/hubusername/repo:demo
DEBU[0000] Updating entry in relocation map for "cnab/helloworld:0.1.1"
Starting to copy image cnab/helloworld:0.1.1...
Completed image cnab/helloworld:0.1.1 copy
DEBU[0004] Bundle fixed
DEBU[0004] Pushing CNAB Bundle docker.io/hubusername/repo:demo
DEBU[0004] Pushing CNAB Bundle Config
DEBU[0004] Trying to push CNAB Bundle Config
DEBU[0004] CNAB Bundle Config Descriptor
DEBU[0004] {
  “mediaType”: “application/vnd.cnab.config.v1+json”,
  “digest”: “sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b”,
  “size”: 498
}
DEBU[0005] Trying to push CNAB Bundle Config Manifest
DEBU[0005] CNAB Bundle Config Manifest Descriptor
DEBU[0005] {
  “mediaType”: “application/vnd.oci.image.manifest.v1+json”,
  “digest”: “sha256:6ec4fd695cace0e3d4305838fdf9fcd646798d3fea42b3abb28c117f903a6a5f”,
  “size”: 188
}
DEBU[0006] Failed to push CNAB Bundle Config Manifest, trying with a fallback method
DEBU[0006] Trying to push CNAB Bundle Config
DEBU[0006] CNAB Bundle Config Descriptor
DEBU[0006] {
  “mediaType”: “application/vnd.oci.image.config.v1+json”,
  “digest”: “sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b”,
  “size”: 498
}
DEBU[0006] Trying to push CNAB Bundle Config Manifest
DEBU[0006] CNAB Bundle Config Manifest Descriptor
DEBU[0006] {
  “mediaType”: “application/vnd.oci.image.manifest.v1+json”,
  “digest”: “sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549”,
  “size”: 193
}
DEBU[0006] CNAB Bundle Config pushed
DEBU[0006] Pushing CNAB Index
DEBU[0006] Trying to push OCI Index
DEBU[0006] {“schemaVersion”:2,”manifests”:[{“mediaType”:”application/vnd.oci.image.manifest.v1+json”,”digest”:”sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549″,”size”:193,”annotations”:{“io.cnab.manifest.type”:”config”}},{“mediaType”:”application/vnd.docker.distribution.manifest.v2+json”,”digest”:”sha256:a59a4e74d9cc89e4e75dfb2cc7ea5c108e4236ba6231b53081a9e2506d1197b6″,”size”:942,”annotations”:{“io.cnab.manifest.type”:”invocation”}}],”annotations”:{“io.cnab.keywords”:”[\”helloworld\”,\”cnab\”,\”tutorial\”]”,”io.cnab.runtime_version”:”v1.0.0″,”org.opencontainers.artifactType”:”application/vnd.cnab.manifest.v1″,”org.opencontainers.image.authors”:”[{\”name\”:\”Jane Doe\”,\”email\”:\”jane.doe@example.com\”,\”url\”:\”http://example.com\”}]”,”org.opencontainers.image.description”:”A short description of your bundle”,”org.opencontainers.image.title”:”helloworld”,”org.opencontainers.image.version”:”0.1.1″}}
DEBU[0006] OCI Index Descriptor
DEBU[0006] {
  “mediaType”: “application/vnd.oci.image.index.v1+json”,
  “digest”: “sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2”,
  “size”: 926
}
DEBU[0007] CNAB Index pushed
DEBU[0007] CNAB Bundle pushed
Pushed successfully, with digest “sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2”

Let’s check that our bundle has been pushed to Docker Hub:

We can now pull our bundle back from the registry. Only the bundle.json file is fetched, but as you may notice, it now contains a digested reference to the image manifest of every component, all within the same registry repository. The Docker Engine will pull any images required by the bundle at runtime, so pulling a bundle is a lightweight operation.

$ ./bin/cnab-to-oci pull hubusername/repo:demo --log-level=debug

DEBU[0000] Pulling CNAB Bundle docker.io/hubusername/repo:demo
DEBU[0000] Getting OCI Index Descriptor
DEBU[0001] {
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "digest": "sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2",
  "size": 926
}
DEBU[0001] Fetching OCI Index sha256:fcee8577f3acc8ddc6e0280e6d1eb15be70bdff460fe7353abf917a872487af2
DEBU[0001] {
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549",
      "size": 193,
      "annotations": {
        "io.cnab.manifest.type": "config"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:a59a4e74d9cc89e4e75dfb2cc7ea5c108e4236ba6231b53081a9e2506d1197b6",
      "size": 942,
      "annotations": {
        "io.cnab.manifest.type": "invocation"
      }
    }
  ],
  "annotations": {
    "io.cnab.keywords": "[\"helloworld\",\"cnab\",\"tutorial\"]",
    "io.cnab.runtime_version": "v1.0.0",
    "org.opencontainers.artifactType": "application/vnd.cnab.manifest.v1",
    "org.opencontainers.image.authors": "[{\"name\":\"Jane Doe\",\"email\":\"jane.doe@example.com\",\"url\":\"http://example.com\"}]",
    "org.opencontainers.image.description": "A short description of your bundle",
    "org.opencontainers.image.title": "helloworld",
    "org.opencontainers.image.version": "0.1.1"
  }
}
DEBU[0001] Getting Bundle Config Manifest Descriptor
DEBU[0001] {
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "digest": "sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549",
  "size": 193,
  "annotations": {
    "io.cnab.manifest.type": "config"
  }
}
DEBU[0001] Getting Bundle Config Manifest sha256:b9616da7500f8c7c9a5e8d915714cd02d11bcc71ff5b4fd190bb77b1355c8549
DEBU[0001] {
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b",
    "size": 498
  },
  "layers": null
}
DEBU[0001] Fetching Bundle sha256:e91b9dfcbbb3b88bac94726f276b89de46e4460b55f6e6d6f876e666b150ec5b
DEBU[0002] {
  "schemaVersion": "v1.0.0",
  "name": "helloworld",
  "version": "0.1.1",
  "description": "A short description of your bundle",
  "keywords": [
    "helloworld",
    "cnab",
    "tutorial"
  ],
  "maintainers": [
    {
      "name": "Jane Doe",
      "email": "jane.doe@example.com",
      "url": "http://example.com"
    }
  ],
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "cnab/helloworld:0.1.1",
      "contentDigest": "sha256:a59a4e74d9cc89e4e75dfb2cc7ea5c108e4236ba6231b53081a9e2506d1197b6",
      "size": 942,
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json"
    }
  ]
}
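As a small sketch of what you can do with that file, here is a jq one-liner, assuming the pulled bundle.json has been written to the current directory, that lists each invocation image along with its pinned digest:

```shell
# Extract each invocation image and its content digest from a pulled
# CNAB bundle.json (field names as in the bundle shown above).
jq -r '.invocationImages[] | "\(.image) @ \(.contentDigest)"' bundle.json
```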

cnab-to-oci has been integrated into Docker App as of the latest beta release, v0.9.0-beta1, letting you push and pull your entire application with the same UX as a regular Docker container image. And since Docker App is a standard CNAB runtime, it can also run this generic CNAB example:

$ docker app pull hubusername/repo:demo
Successfully pulled "helloworld" (0.1.1) from docker.io/hubusername/repo:demo

$ docker app run hubusername/repo:demo
Port parameter was set to 
Install action
Action install complete for upbeat_nobel
App “upbeat_nobel” running on context “default”

Want to Know More?

If you’re interested in more details about CNAB, a few blog posts are available:

Please note that we will give a talk on this topic at KubeCon Europe 2020: “Sharing is Caring! Push your Cloud Application to an OCI Registry” by Silvin Lubecki & Djordje Lukic.

And of course, you can also find more information directly on the cnab-to-oci GitHub repository.

Contributions are welcome!!! 🤗

The post Docker Donates the cnab-to-oci Library to cnab.io appeared first on Docker Blog.
