Kolby Allen is a Cloud Platform Engineer at WatchGuard Technologies, specializing in custom cloud services that optimize client productivity and establish secure, efficient systems that strategically minimize overall IT expenditures. Level’s Jim Knapik asked Allen a number of questions about his experience with and thoughts on cloud computing and DevOps. Here’s what Kolby had to say:

@kolbyallen

What motivated you to first learn how to design and operate in AWS?

I have always valued forward-thinking solutions that result in greater effectiveness and efficiency in any tech environment. Before working in AWS, managing physical servers was a huge upfront expense and required a lot of planning and projection in order to match the technology with the end use. When AWS became an option, I was drawn to the ability to scale the use (and cost) of the service based on usage. I applied myself to learning it because I could see that if it was utilized well, it would provide great advantages over traditional servers.

What was your background? As a result, what was easy to pick up and what took more time or extra focus?

My undergrad and graduate school education was in physical chemistry. During my grad program, I became interested in mixed quantum/classical mechanics—a topic that required extensive physics, chemistry, programming, automation, and high-performance computing. My research group’s work required reproducible/repeatable methods and results, so through my experience there, I learned the value of automating processes. I found that I was able to pick up those skills quickly, enjoyed the problem-solving environment, and gained valuable experience in that discipline that continues to benefit me where I am now. My interest in the computer/technology side of things eventually led me to switch careers from college chemistry professor (my role for a few years after graduation) to IT professional.

“My research group’s work required reproducible/repeatable methods and results, so through my experience there, I learned the value of automating processes”

I’ve seen many people use the quote, “If you need to use SSH, then your automation is broken.” Is this something you have a strong opinion on? Do you agree? Why or why not?

SSH should be viewed as a tool that can be used but should not be relied upon as a critical component of a system. There are circumstances where you need to SSH to a box in order to view a log or actively troubleshoot an issue. I believe if you let yourself SSH immediately to a server, it can easily turn into a crutch, as opposed to a strategic decision in specific circumstances. If you are looking to use SSH as an interactive tool to configure a server, then I do feel that your automation is broken. You should not be interactively configuring servers if you are following a DevOps methodology.
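As a rough illustration of that last point, here is a minimal sketch of configuring a server non-interactively at launch, so nobody has to SSH in to set it up by hand. This is an editorial example rather than anything Allen describes; the region, AMI ID, and packages are placeholders.

```python
# Minimal sketch: bake server configuration into EC2 user data so it is
# applied automatically at first boot instead of over an interactive SSH
# session. All identifiers below are placeholders.
import boto3

USER_DATA = """#!/bin/bash
# Runs once at first boot -- no interactive SSH session required.
yum install -y nginx
systemctl enable --now nginx
"""

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,                # boto3 base64-encodes this for EC2
)

print("Launched:", response["Instances"][0]["InstanceId"])
```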

As someone who uses automation, what would you say are the criteria that drive tool choices?

  • Ease of use – If a tool is too complicated to use, it is not worth it. I’d rather spend my time leveraging a tool than learning how to use it. In my experience, tools that are easy to use are more stable and require less maintenance to keep running. Just because a tool is easy to use doesn’t mean it isn’t powerful. GitHub is simple to use, but the more time I spend with it, the more I discover the power it has.
  • Ability to integrate – Tools need to be able to integrate (via plugins or an API) with your environment. If a tool can’t do that, then it is not worth using. An example is the ability to use a webhook to trigger an action – GitHub talking to Jenkins to start a build job (a rough sketch of this flow follows this list).
  • Security – If the tool is going to be integrated into your environment and perform actions, it needs to be secure. CI/CD systems tend to be pivotal to the organization and hold a lot of power within it. If a tool isn’t secure, it is a risk that may not be worth taking.
  • Auditing Ability – The ability to audit automation is critical. I want to be able to see who modified what and when they did it so there is a method of tracking. This allows you to pursue event-driven security as necessary, which can help with the previous item.
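To make the webhook example concrete, here is a hedged sketch of a small receiver that accepts GitHub’s push webhook and asks Jenkins to start a build job. In practice the Jenkins GitHub plugin can receive the webhook directly; this stand-alone version just makes the flow explicit. The URLs, job name, and credentials are placeholders.

```python
# Sketch: receive a GitHub push webhook and trigger a Jenkins build job.
# All names and credentials below are placeholders.
import requests
from flask import Flask, request, abort

app = Flask(__name__)

JENKINS_URL = "https://jenkins.example.com"
JENKINS_JOB = "my-app-build"                 # hypothetical job name
JENKINS_AUTH = ("ci-bot", "api-token-here")  # user + Jenkins API token

@app.route("/github-webhook", methods=["POST"])
def github_webhook():
    # Only act on push events; GitHub labels the event type in a header.
    if request.headers.get("X-GitHub-Event") != "push":
        return "ignored", 200

    payload = request.get_json(silent=True) or {}
    if payload.get("ref", "") != "refs/heads/main":
        return "ignored", 200

    # Ask Jenkins to start the build job for this repository.
    resp = requests.post(
        f"{JENKINS_URL}/job/{JENKINS_JOB}/build",
        auth=JENKINS_AUTH,
        timeout=10,
    )
    if resp.status_code not in (200, 201):
        abort(502)
    return "build triggered", 202

if __name__ == "__main__":
    app.run(port=8080)
```

A production receiver should also verify GitHub’s X-Hub-Signature-256 header before acting on the payload, which ties directly back to the security and auditing criteria above.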

DevOps and CI/CD aren’t necessarily new concepts, but they certainly don’t have a lot of practitioners. Why do you think it is so hard to find good DevOps personnel?

Having worked in traditional IT as a consultant, I can see one issue is the opposing paradigms of different IT professionals. There is a divide between the way things have always been done and the DevOps way of putting these concepts into practice. People who are familiar and comfortable with the traditional methods don’t like the idea of not being able to SSH into servers, may not be comfortable having to write code, and may not have confidence in their ability to write automation. Automation can be a scary thing for someone who isn’t accustomed to planning things out meticulously with an understanding of all aspects of the project they are working on. DevOps is a dynamic and changing landscape, so people who do it also have to be willing to continue to adapt and grow with the technology and methods—something not all people are interested in doing.

“DevOps is a dynamic and changing landscape, so people who do it also have to be willing to continue to adapt and grow with the technology and methods.”

If you are interviewing a DevOps candidate is there something in particular you look for or a test that you give to determine skill level?

I prefer to have a conversation with the candidate and discern what they know by how they talk about DevOps. The first thing I evaluate is their understanding of what DevOps actually is. It is usually pretty obvious within a few minutes whether they are tossing around a buzzword or have real experience with it. I look for their familiarity with the tools and methodology, and I also want to see if they are honest about their limitations and experience. If they have an open mind and a drive to learn the aspects they might not know as well, there is real potential for those candidates.

What advice would you give someone interested in learning more about DevOps and increasing their skill level?

I would recommend the following:

  • Read “The DevOps 2.0 Toolkit” (https://leanpub.com/the-devops-2-toolkit) – This book does a great job of explaining the methodology. I read and re-read this book regularly.
  • Learn the DevOps tools – Jenkins/GoCD/etc., Artifactory (or another artifact management tool), GitHub/TFS/SVN/etc., and a language to help do your deployments (Bash, Python, GoLang, etc.). These tools make DevOps a lot easier and will allow you to unlock more advanced things (a small deployment-script sketch follows this list).
  • Participate in meetups that involve other professionals who are doing DevOps (AWS meetups, Azure meetups, Jenkins meetups, etc.). I am a strong believer in collaboration, and these meetups allow you to work with other people who can share valuable knowledge and let you see other perspectives that you may be able to integrate into what you are doing.
  • Go to DevOps conferences – DevOpsDays, AWS re:Invent, Jenkins World, ChefConf. The education available at these events is helpful and valuable for anyone who is looking to increase their skill level.
  • Read applicable blogs (lots of them). Continue to learn about new tools, strategies, and methods, and apply what you learn to your DevOps work.
  • Try things. Innovation requires creativity, and that often happens by going out on a limb to try something new. Many of the things I do come about by experimenting with different tools, combinations, and ideas to figure out what will work in the processes I’m currently working on.
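As a small example of the kind of deployment scripting mentioned in the second point above (Python here, though Bash or Go would serve just as well), the sketch below tags a build artifact with the current git commit and uploads it to S3. The bucket name and artifact path are placeholders.

```python
# Minimal deployment-script sketch: version a build artifact by git commit
# and push it to S3. Bucket and artifact names are placeholders.
import subprocess
import boto3

BUCKET = "example-deploy-artifacts"   # placeholder bucket
ARTIFACT = "build/app.zip"            # placeholder artifact produced by CI

def git_sha() -> str:
    """Return the short SHA of the current commit, used to version the artifact."""
    return subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()

def main() -> None:
    key = f"releases/app-{git_sha()}.zip"
    s3 = boto3.client("s3")
    s3.upload_file(ARTIFACT, BUCKET, key)
    print(f"Uploaded {ARTIFACT} to s3://{BUCKET}/{key}")

if __name__ == "__main__":
    main()
```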

 

Jim Knapik is the head instructor for Level's Cloud Computing bootcamp at Northeastern University, where he also teaches data science and leads enterprise program development. Knapik was formerly CEO of Oz Communications and Vantrix. Previously, Jim led Data Networks and Signaling and then the Messaging Technology teams at AT&T. 
