September 13th, 2018 | 11 mins 10 secs
developers, devops, diversity, diversity in tech, programming, software engineering, women in tech
The concept of helping organizations deploy to serverless faster and with fewer resources is straightforward. But underneath the platform Stackery offers is, of course, a lot of heavy-lifting development work that goes into creating the necessary code. The end result is about making serverless more accessible for organizations that might otherwise not have the resources and know-how in house to make the move.
May 22nd, 2018 | 19 mins 58 secs
cloudnativecon, cncf, kccn, kubecon, kubernetes, opa, open policy agent, policy, software development, software engineering
On this episode of The New Stack Makers, TNS Editor-in-Chief Alex Williams sits down with Chris Aniszczyk, COO of the Cloud Native Computing Foundation, and Torin Sandall, a Software Engineer at Styra, to discuss how the Open Policy Agent (OPA) is a secure, simple, and compliant way to manage services.
"From a CNCF perspective, policy was a missing piece within our cloud native landscape," Aniszczyk explained, discussing why the Foundation on-boarded OPA.
March 14th, 2018 | 28 mins 46 secs
cdn, edge computing, fastly, open source, open source software, software engineering, technology stacks
In this episode of The New Stack Makers, Tyler McMullen, co-founder and CTO at Fastly, a leading content delivery network (CDN), sits down with TC Currie to talk about his passion for speed, why his engineers are required to spend part of their time on projects that are not likely to succeed, and what experienced engineers bring to the table (aside from their knowledge of C and Perl).
September 26th, 2017 | 27 mins 21 secs
cloud, envoy, lyft, microservices, pagerduty, pagerduty summit, service mesh, software engineering
How best should an organization transition its old behemoth monolith architecture into the bright shiny new world of microservices? The principal software engineer of the brightest and shiniest service out there, Lyft, tells us you don't have to. Matt Klein told The New Stack's Scott Fulton at PagerDuty Summit 2017 that a startup, just like Lyft was a few short years ago, can develop its own monolith more easily than it can develop complex microservices.
July 31st, 2017 | 22 mins 37 secs
github, lgtm, open source, open source software, oscon, oss, software engineering
A pull request is a potential contribution back to a project on GitHub. It can be difficult to manage all of the requests that come in, and the need to automate the approval of pull requests has led to the creation of several open source projects. LGTM ("Looks Good to Me") is one such automated system, built around GitHub. It blocks pull requests from being merged upstream until a given number of approvals has been received.
LGTM does not come pre-configured for being bolted into certain implementers’ existing tool chains. Now, being a general-purpose project, perhaps no one should expect it to be configured this way. But then what’s the purpose of the open source development process if not to open up integration capabilities to implementers?
At the last OSCON conference, Capital One lead software engineer Jon Bodner told the story of how LGTM's principal developer gave his team his blessing to produce a fork that adds functionality useful not just to Capital One or other financial institutions, but to any organization of its magnitude.
Watch on YouTube: https://www.youtube.com/watch?v=p7mWHq-ONZA
July 10th, 2017 | 15 mins 56 secs
cloud foundry, developers, devops, equinix, it, p2p software, software, software engineering
Up to this point, we haven’t talked a lot about Equinix here in The New Stack. From a data center operator’s perspective, that’s a bit like discussing the solar system but avoiding mention of Jupiter. For a great many IT professionals, Equinix is a daily subject of consideration, a fact of their lives, like electricity and coffee.
June 28th, 2017 | 33 mins 38 secs
cloud foundry, cloud foundry summit, ha, high availability, kubernetes, kubo, rda, software engineering, virtual machines, vms
Kubernetes’ native strategy for high availability is to set up multiple master replicas and engineer a failover system that passes control to a replica when the main master fails. While that failover is in progress, nothing can schedule workloads. And the problem with high availability, here and everywhere else, is that it makes things considerably more complex.