Seamlessly Secure Your Cloud Workloads

This post was previously published on The New Stack

You’ve secured your cloud identities. You’ve hardened your cloud security posture. You’ve configured strong cloud access controls. But there’s still one more thing you need in order to secure your cloud environment: a cloud workload protection platform, or CWPP. 

Cloud workload protection platforms secure the workloads that run on your cloud — which are distinct from the infrastructure, user identities and configurations that form the foundation of your cloud environment.

This article unpacks why a CWPP is a critical ingredient in any cloud security strategy. It explains how CWPPs work, identifies examples of workloads that you can secure with CWPPs and discusses the importance of automation within the context of a CWPP.

What Is Cloud Workload Protection?

Cloud workload protection is the practice of securing workloads that you deploy in the cloud. In other words, cloud workload protection mitigates risks that exist at the workload level of your cloud environment, as opposed to the infrastructure or configuration level.

The workloads in question could be software, data or a combination thereof that your organization hosts in the cloud. For example, cloud workload protection could apply to the operating system and application running in a cloud-based VM instance, or it could secure the data inside an object storage bucket.

What Is a CWPP?

Tools that provide cloud workload protection are often called cloud workload protection platforms, or CWPPs. Protecting cloud workloads is important because most other types of cloud security practices don’t address workload risks.

Cloud security posture management, or CSPM, alerts you to problems within cloud infrastructure configurations that could create security issues, like IAM policies that provide public access to sensitive data. But CSPM doesn’t cover configuration risks within workloads, such as a lack of encryption for data as it moves within an application.

Likewise, you can track cloud metrics and logs to identify potential security threats. But that data originates mostly from cloud IaaS providers, not individual applications, so it does little to reveal security risks that are specific to applications or data you’ve deployed in the cloud.

CWPP solutions fill these gaps by ensuring that you can protect the code and data that actually run on your cloud, not just the underlying cloud environment.

It’s also worth noting that cloud workload protection platforms help you secure workloads across multiple clouds. Because CWPPs focus on your workload rather than the cloud that hosts it, you can use cloud workload protection to identify security risks in any type of cloud-based workload, even as it moves across clouds.

CWPPs at Work: Some Examples

To contextualize cloud workload protection further, consider how it applies in the following domains.

Containers

When you deploy cloud workloads using containers, you must address special security challenges. You need to make sure that containers can’t run in privileged mode, for example. You must also scan container images for malware.
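
For example, the checks might look like this from the command line (a hedged sketch: the image name is hypothetical, and trivy is just one example of an image scanner):

```sh
# Scan a container image for known vulnerabilities and malware
# (trivy is one such scanner; the image reference is hypothetical).
trivy image registry.example.com/app:1.0

# Run the container with privilege escalation disabled and all
# Linux capabilities dropped, rather than in privileged mode.
docker run --security-opt no-new-privileges --cap-drop ALL registry.example.com/app:1.0
```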

Cloud workload protection for containers ensures that you have the specific processes in place that are required to protect containerized workloads, independent of other security processes that you apply to your cloud environment.

Kubernetes Security

Kubernetes, too, poses a variety of special security challenges that can only be addressed at the workload level. You must ensure that Kubernetes role-based access control policies and security contexts are configured properly, for instance. You should also use Kubernetes audit logs to monitor for potential security risks that arise within your Kubernetes environment.
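
To make that concrete, here’s a hedged sketch of those two controls; the names, namespace and image are hypothetical:

```yaml
# A namespace-scoped Role that grants read-only access to pods:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: staging
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # no create/update/delete
---
# A pod whose security context blocks privileged execution:
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: staging
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        runAsNonRoot: true
```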

Virtual Machine Security

Even if your cloud VM service is properly configured, security issues may lurk inside your VMs. The images you use could contain malware, or simply configurations (like the absence of a kernel hardening framework) that weaken your security posture. Cloud workload protection alerts you to these risks.

Vulnerability Scanning

Vulnerabilities can arise in any number of places across a cloud environment — within applications, within operating systems, within container images and so on.

Cloud workload protection lets you scan for vulnerabilities across all components and layers of your workloads. Think of it as one-stop shopping for vulnerability discovery and management at the workload level, regardless of which workloads you run or which clouds host them.

Serverless Security

Serverless functions abstract applications from the underlying server environment, which reduces potential attack surfaces. But the functions themselves could still contain vulnerabilities. They could also be configured in ways that increase risks. Cloud workload protection automatically discovers problems like these within serverless functions.

Application Security

Cloud-based applications come in many forms, but they can all contain security risks — such as malware, vulnerable software components and a lack of security controls like encryption. By scanning applications for risks like these, cloud workload protection helps ensure application security across your cloud environment.

Choosing a Cloud Workload Protection Platform 

When integrating cloud workload protection into your cloud security strategy, strive to implement a solution that is:

  • Fully automated, because you can’t feasibly manage workload-level security risks by hand.
  • Cloud-agnostic, so you can secure any workload on any cloud.

A service such as Torq.io meets both of these requirements. It lets anyone – not just cybersecurity experts, but any member of your organization – define security rules that workloads must meet. Then Torq automatically and continuously scans your cloud workloads for deviation from these rules.

The result is automated, comprehensive cloud workload protection, no matter how your cloud environment is configured or what you run on it.

Automatically Add IP Addresses to a Penalty Box in Cloudflare with Torq

Good security may come from strong defenses, but strong security comes from a good offense. This is especially true for network security, where minutes can make the difference between a breach and a near miss. 

For example, if an unknown IP address triggers an alert for suspicious or abusive behavior, the faster you can isolate and block that address, the less likely it is that the person or entity at the other end can do damage. But the time it takes for a human to look up the IP address, verify it, then add it to a penalty box or blocklist can very easily use up those few minutes. 

With Torq, you can automate the process by using a Slack command to add the address to a list within seconds. 

How Torq automates IP penalty boxing in Cloudflare

All Torq users have access to the pre-built workflow template Network – IP Penalty Box with Timeout via Slack (Cloudflare). This flow will check whether an IP address is IPv4 or IPv6, add it to the appropriate penalty box, wait for a set duration, then remove it.

Here’s how it works:

    1. A trigger is sent to Torq with the offending IP address.
    2. Torq will verify which type of address to handle (IPv4 or IPv6). 
    3. The address is then added to the IP Access Rules in Cloudflare.
    4. If the block was successful, Torq will wait for a set duration and then remove the block when it expires.
    5. If an address is not provided with the trigger, or the address cannot be identified as either IPv4 or IPv6, an error message is sent to the requesting user.

IP penalty workflow template in Torq
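
The type check in step 2 is easy to picture in code. Here’s a minimal Go sketch of that logic using only the standard library; it’s an illustration, not Torq’s internal implementation:

```go
package main

import (
	"fmt"
	"net"
)

// classify decides whether an address is IPv4 or IPv6 so it can be
// added to the matching penalty box, or rejected with an error.
func classify(addr string) (string, error) {
	ip := net.ParseIP(addr)
	if ip == nil {
		return "", fmt.Errorf("%q is not a valid IP address", addr)
	}
	if ip.To4() != nil {
		return "IPv4", nil
	}
	return "IPv6", nil
}

func main() {
	for _, addr := range []string{"203.0.113.7", "2001:db8::1", "not-an-ip"} {
		kind, err := classify(addr)
		if err != nil {
			fmt.Println("error:", err) // the workflow would notify the requesting user
			continue
		}
		fmt.Printf("%s is %s\n", addr, kind)
	}
}
```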

By default, the workflow uses Cloudflare as its network security solution, but it can be customized for other solutions with a few clicks. Likewise, the flow is triggered with a Slack command, but it can be set to use Microsoft Teams, Webex, or even a webhook. Using a webhook as the trigger means the workflow can be executed automatically without human intervention—further improving threat response times and overall security posture.

Get the workflow template

Torq customers can find the IP penalty box workflow and dozens more in the template library. Just add it to your Torq account, set your preferred trigger, and determine a penalty box duration. That’s it!  

You may also want to check out some of our related templates, such as Check periodically for new Carbon Black alerts, then handle and Use Slack command to analyze suspicious URLs and IPs in VirusTotal.

Get started with Torq

Not using Torq yet? Get in touch for a trial account and see how the no-code security automation platform unifies your security, infrastructure, and collaboration tools to create a stronger security posture.

Modern Security Operations Center Framework

This post was previously published on The New Stack

The Origins of Modern Cloud/IT Environments

With agile development, the software development life cycle has evolved to focus on customer satisfaction, enhancing product features based on user feedback. This helps shorten the time to market, since teams can release a minimum viable product, then continuously improve its features. The agile approach encourages team cooperation through sprints, daily standups, retrospectives, testing, quality assurance and deployment. Through continuous integration and continuous delivery (CI/CD), along with the integration of security into operations, teams can deliver software faster. 

Yet, as more and more businesses adopt cloud computing, cybersecurity threats grow due to bad actors who target the security vulnerabilities of their complex hybrid infrastructures, which include public cloud services. Consequently, SecOps plays a crucial role in ensuring that DevOps teams prioritize security. Modern security tools and frameworks aid SecOps teams by providing zero-downtime deployment, automated deployment and reduced attack surfaces.

Security Operation Center (SOC) and SecOps Evolution

Traditionally, security was an afterthought in most IT environments. It was structured as a siloed department and only came to the forefront when an incident had been discovered. Key organizations, such as government agencies, had network operations centers (NOCs), which focused on detecting incidents in their network devices. 

While traditional security operations centers (SOCs) were reactive to security threats and attacks, the next generation of SOCs takes a more proactive approach using automation and real-time security information and event management (SIEM). Modern SOCs are more sophisticated. They emphasize collaboration between people, technologies and processes to thoroughly monitor and investigate security events in real time, which enables them to prevent, detect, and respond to cyberattacks. They go above and beyond standard security compliance by establishing cyber defense and incident response centers that collaborate to manage threat intelligence and system security.

Cyber warfare has never been more complex, and the bad news is that it is only becoming more advanced and more pervasive. Security operations and SOCs are under increasing pressure to identify and respond to threats quickly, as well as to harden defenses against a growing range of threats. As a result, IT frameworks such as MITRE ATT&CK and D3FEND have been developed to address many of these problems. These frameworks are used to detect, investigate and protect against security breaches and attacks in today’s cloud systems.

To be successful, modern SecOps teams must be given more authority to use security solutions that replace “black box” security teams with automation, threat hunting, vulnerability management and real-time monitoring. 

What Is the MITRE ATT&CK Framework?

MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) is a knowledge source that assists SecOps intelligence decision-makers. It’s a behavioral threat model used to develop, test and improve behavior-based detection capabilities over time. Penetration testers use the MITRE ATT&CK methodology to orchestrate their attacks and locate vulnerabilities in your infrastructure, then exploit them and report their findings. It helps enterprises understand malicious behaviors and mitigate the risks and threats they face.

The MITRE ATT&CK framework employs a set of methodologies and tactics to identify compromise indicators, including defense evasion techniques to evade detection, lateral movement techniques to spread throughout your infrastructure and exfiltration to steal data. Employing these adversarial tactics helps enterprises create a comprehensive list of known prospective attack techniques, which SOC teams can use to find potential weaknesses, then focus on developing defensive measures.  

What Is the MITRE D3FEND Framework?

MITRE D3FEND is a companion of MITRE ATT&CK. It uses a knowledge graph to provide SOC teams with defensive countermeasures to harden and protect their infrastructures based on the identified attack tactics and techniques. D3FEND complements the threat-based ATT&CK model by providing ways to counter common offensive techniques, thereby reducing a system’s potential attack surface.

How Can Modern SOCs Benefit from MITRE ATT&CK and D3FEND Frameworks?

Security breaches, which can result in serious consequences such as lost customers, lost income and damaged reputations, remain a constant threat. SOC teams can use the ATT&CK framework to measure their effectiveness in detecting, analyzing and responding to cyber intrusions. They can also use ATT&CK to better understand and document adversarial group profiles so that they can simulate possible adversarial attack scenarios and come up with cybersecurity controls. Modern SOC teams can use MITRE D3FEND to implement security solutions with the detailed countermeasures that it provides. Using the ATT&CK and D3FEND frameworks together will help teams not only identify defensive gaps, but also make more strategic security tooling decisions.

One key concept behind the MITRE ATT&CK and D3FEND frameworks is threat hunting. Threat hunting tools search for cyber threats lurking undetected across networks and endpoints. Here at Torq, we provide a threat-hunting tool that will quickly automate your SOC workflows in extended detection and response; security information and event management; and endpoint detection and response. Start automating today!

Automated Just-In-Time Permissions Using JumpCloud+Torq

For security teams, properly managing which users can access resources and governing the level of access those users have is about as basic as locking the door at night. 

Understandably then, there are thousands of options available to fine-tune or revoke access, and it’s likely that issues come up daily for most companies—if not hourly. But chasing alerts every time a user needs access to a new resource or manually auditing systems to see what entitlements they already have are poor uses of an analyst’s time. These are the classic signs that a process needs to be automated. 

Torq can help your team automate these controls in a number of different ways using pre-built workflows. In combination with JumpCloud, organizations can easily implement layers of security that make sense to both end-users and auditors. By quickly moving cloud-based identities among different groups, IT admins and security teams can add in the conditions of access that make sense for each resource, regardless of where the users are. 

This blog will focus on just-in-time (JIT) access for temporary permissions using JumpCloud user groups and Slack. JumpCloud user groups can allow access to SSO applications, provision users, authenticate network access, and even create local profiles across Mac, Windows, and Linux devices. In this example, we will show how to easily provision access to SSO applications.

How Torq Automates JIT Permissions

This workflow runs when credentials are requested by a Slack command, and if approved, adds users to a JumpCloud user group. When the time limit for access has expired, the user will be automatically removed from the user group, revoking permissions and closing security gaps. 

How it works:

  1. A user invokes a Slack command, triggering a temporary access request. 
  2. JumpCloud then pulls the groups that the user already belongs to, and Torq compares them to applications that have been configured to provide JIT access.
  3. Slack asks the user which group they would like access to and for how long.
  4. Torq then sends the access details to a designated Slack channel and requests approval on behalf of the user.
  5. If access is rejected, or the request times out, the user is notified through Slack.
  6. If access is approved, the user is added to the group in JumpCloud and receives a notification. 
  7. When the predetermined timer expires, Torq sends a command to JumpCloud to remove the user from the group, and the user is notified through Slack.

Workflow builder in Torq showing the steps for just-in-time access
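
To make the sequence concrete, here’s a schematic Go sketch of the grant-and-revoke logic in steps 6 and 7. The JumpCloud helpers are hypothetical placeholders rather than real API bindings, and the durations are shortened so the sketch runs in seconds:

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins for the JumpCloud API calls the workflow
// makes; they are placeholders, not real SDK bindings.
func addUserToGroup(userID, groupID string) error {
	fmt.Printf("adding %s to group %s\n", userID, groupID)
	return nil
}

func removeUserFromGroup(userID, groupID string) error {
	fmt.Printf("removing %s from group %s\n", userID, groupID)
	return nil
}

// grantTemporaryAccess mirrors steps 6 and 7: add the user to the
// group, then revoke membership automatically when the timer expires.
func grantTemporaryAccess(userID, groupID string, d time.Duration) error {
	if err := addUserToGroup(userID, groupID); err != nil {
		return err
	}
	time.AfterFunc(d, func() {
		if err := removeUserFromGroup(userID, groupID); err != nil {
			fmt.Println("revocation failed:", err) // a real workflow would alert an analyst
		}
	})
	return nil
}

func main() {
	// In the real workflow, the duration comes from the approved Slack request.
	if err := grantTemporaryAccess("user-123", "sso-github", 2*time.Second); err != nil {
		fmt.Println("grant failed:", err)
	}
	time.Sleep(3 * time.Second) // keep the sketch alive past expiry
}
```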

As with all workflow templates, users can modify this to align with organizational policies. For example, if a log event is required, steps can be added to log the access into ServiceNow or Jira. 

Using this workflow helps consolidate work into a single medium—Slack channels—and automates user-driven tasks like requesting access. But it still maintains the crucial “human in the loop” for determining if the access is appropriate and/or necessary. Users get access when they need it, and analysts avoid the toil of small tasks. Another win for automation.

Get the JIT Workflow Template

Torq users can find this JIT workflow in the app along with many others for managing identities and access, like Suspend Accounts with No Logins after N Days and Ask User to Confirm Failed Login Attempts.

Get Started Today

Not using Torq yet? Get in touch for a trial account and see how Torq’s no-code automation accelerates security operations to deliver unparalleled protection.

ITOps vs. SecOps vs. DevOps vs. DevSecOps

ITOps, SecOps, DevOps, and DevSecOps may sound similar. And they are — to a degree. But they have different areas of focus, histories, and operational paradigms.

Keep reading for an overview of what ITOps (IT operations), SecOps (security operations), DevOps (development operations), and DevSecOps (development, security, and operations) mean and how they compare — and why you shouldn’t worry as much about defining these terms perfectly as about finding ways to operationalize collaboration between your various teams.

SecOps vs. ITOps

SecOps is what you get when you combine security teams with IT operations teams, or ITOps. Put another way, it’s the integration of security into IT operations.

Traditionally, most organizations have maintained both ITOps and security teams. The ITOps team’s responsibility is to manage core IT processes — like provisioning infrastructure, deploying applications, and responding to performance issues. The security team, meanwhile, specializes in identifying and responding to security risks.

In the past, security and IT operations did not work in tandem. They pursued their various responsibilities in isolation from each other.

SecOps changes that. The big idea behind SecOps is that it combines security with ITOps in a way that maximizes collaboration.

This isn’t to say that ITOps teams are totally incapable of managing security without a SecOps mindset. Any decent IT team has always done its best to secure the environments it manages. But ITOps engineers never specialized in security; the task of identifying and responding to security problems fell to a separate team of security professionals.

With SecOps, the security team works more closely with the IT operations team, and vice versa. When done well, SecOps ensures that security is an active priority across all day-to-day IT operations rather than something that is managed separately.

To be clear, SecOps doesn’t mean turning your security and ITOps teams into a single, combined team. The teams remain separate; they just work more closely together.

ITOps vs. DevOps

DevOps is a collaboration between developers and IT operations teams.

Like SecOps, DevOps was conceived to address inefficiencies associated with isolation between teams. The goal of DevOps is to ensure that developers understand the needs of ITOps when they write software, and that IT operations teams understand what developers intend for software to do when they manage it.

Also like SecOps, DevOps doesn’t erase independent development and ITOps. Some organizations may choose to create a new DevOps team alongside these two other teams, while others “do” DevOps simply by finding ways for developers and IT engineers to work more closely together. Either way, though, businesses still typically keep their development and IT operations teams.

SecOps vs. DevOps

SecOps and DevOps share key high-level similarities:

  • Their main goal is to improve collaboration between teams that would otherwise operate independently.
  • They tend to encourage automation and real-time communication in an effort to foster collaboration.
  • They increase the efficiency and scalability of complex operations.
  • They represent philosophies or goals more than specific operational frameworks. In other words, there is no specific recipe to follow or tool to use in order to enable either SecOps or DevOps. It’s up to organizations to decide how to operationalize both concepts.

The big difference between the two concepts is the specific teams involved. As we’ve noted, SecOps brings together security teams and ITOps teams, while DevOps focuses on collaboration between developers and ITOps.

So, ITOps is part of both equations, but SecOps and DevOps are otherwise different.

What about DevSecOps?

It’s hard to talk about ITOps, SecOps, and DevOps without also mentioning DevSecOps, a concept that brings all the teams we’ve talked about so far — development, security, and IT operations — together into a collaborative model.

You can find different definitions of DevSecOps out there. Some treat it as the result of combining DevOps with SecOps. Others imply that the distinction lies in how much your DevSecOps program focuses on development as opposed to IT operations. 

One way to think about DevSecOps is that it embraces the “shift left” of security, meaning that security implementation and testing happens much earlier in software and application development as opposed to being added in afterward.

The differences between DevOps, SecOps, and DevSecOps are nuanced, but at their core they are collaborative efforts by once disparate teams looking to break down silos.

Collaboration Is the Key

The key takeaway is that with ITOps, SecOps, DevOps, and DevSecOps, collaboration is the foundation for success.

What really matters is the ability to ensure that all stakeholders — developers, IT engineers, security engineers, and anyone else who plays a role in software development and delivery — have access to the tools and data necessary to integrate security into all aspects of the software delivery process. That only happens when security becomes the responsibility of everyone, not just a specialized team of cybersecurity experts.

Whether you want to approach integrated ITOps through SecOps, DevOps, DevSecOps, or all three, your goal should be to find ways to achieve meaningful collaboration between your various teams. Don’t just think in abstract terms; think about what it means on a day-to-day basis to ensure that each team understands and can help support the goals of other teams rather than existing on its own island.

Open Source Cybersecurity: Towards a Democratized Framework

This post was previously published on The New Stack

Today, anyone can contribute to some of the world’s most important software platforms and frameworks, such as Kubernetes, the Linux kernel or Python. They can do this because these platforms are open source, meaning they are collaboratively developed by global communities.

What if we applied the same principles of democratization and free access to cybersecurity? In other words, what if anyone could contribute to security initiatives and help build a cybersecurity culture without requiring privileged access or specialized expertise?

To explore those questions, it’s worth considering the way that open source has democratized software development and comparing it to the potential we stand to realize by democratizing security. 

Although the connection between open source software and democratized security only goes so far, thinking from this angle suggests a new approach to solving the cybersecurity challenges that have become so intractable for modern businesses.

So, let’s take a look at how open source democratized development and what it might look like to do the same thing for security.

Open Source and the Democratization of Software Development

Until the GNU Project and the Free Software Foundation emerged in the mid-1980s, no one thought of software as something that anyone could help design or create. Instead, if you wanted to build software, you needed membership in an elite club. You generally had to be a credentialed, professional programmer, and you typically had to have a job that put you in a position to create software.

GNU began to change this by allowing volunteers to build software, like the GNU C compiler, that eventually became massively important across the IT industry. Then, in the early 1990s, the advent of the Linux kernel project changed things even more. The Linux kernel was created by an undergraduate, Linus Torvalds, whom no professional programmer had ever heard of. If a twenty-something non-professional Finnish student could create a major software platform, it was clear that anyone could.

Today, anyone still can. You don’t need a special degree or professional connections to contribute to a platform like Kubernetes, as thousands of independent programmers do. Nor do today’s users need to rely on a handful of elite programmers to write the software they want to use. They can go out and help write it themselves via participation in open source projects. (Of course, the big caveat is that you need special skills to write open source software. But we’ll get to that point later.)

As a result of this democratization of development, it has become much easier for users to obtain the software they’d like to use, as opposed to the software that closed-source vendors think they should use. Open source has also turned out to make software development faster, and it tends to lower the overall cost of software development.

Towards an Open Source Cybersecurity Framework

Now, imagine what would happen if the world of cybersecurity were democratized in the way that software development has been democratized by open source.

It would create a world where security would cease to be the domain of elite security experts alone. Instead, anyone would be able to help identify the security problems that they face, then build the solutions they need to address them, just as software users today can envision the software platforms and features they’d like to see, then help implement them through open source projects.

In other words, users wouldn’t need to wait on middlemen — that is, the experts who hold the reins — to build the security solutions they needed. They could build them themselves.

That doesn’t mean that security experts would go away. They’d still be critical, just as professional programmers working for major corporations continue to play an important role alongside independent programmers in open source software development. 

But instead of requiring small groups of security specialists to identify all the cybersecurity risks and solve them on their own, these tasks would be democratized and shared across organizations as a whole.

The result would be a world where security solutions could be implemented faster. The solutions would also more closely align with the needs of users, because the users would be building them themselves. 

And security operations would likely become less costly, too, because the work would be spread out across organizations instead of being handled only by high-earning security professionals.

The Differences Between Open Source and Security Democratization

It’s important, of course, not to overstretch the analogy between open source democratization and security democratization. In particular, there are two main differences between these concepts.

One is that, whereas open source software can be built and shared by communities at large, security is something that is mostly used only internally. The security workflows that users might define using security democratization tools would only apply inside their own companies, rather than being shared with users at large.

The other, bigger limitation is that it takes special skills to build software. While it’s possible for non-coders to contribute to open source projects by doing things like writing documentation or designing artwork, most contributions require the ability to code.

In contrast, security democratization doesn’t have to require specialized security skills on the part of users. By taking advantage of no-code tools that let anyone define, search for and remediate security risks, businesses can democratize their security cultures without having to teach every employee to be a cybersecurity expert.

What this means is that, although it may seem far-fetched at first glance to democratize security, the means for doing so — no-code automation frameworks — are already there. They just need to be placed in the hands of users.

Conclusion

Over the course of a generation, the open source software movement radically transformed the way software is built. Although plenty of closed-source code continues to exist, programmers today can contribute to critical software platforms that operate on an open source model without requiring special connections or credentials. The result has been software that better supports users, and that takes less time and money to build.

Businesses can reap similar benefits in the realm of security by using no-code security automation tools to democratize their approach to security. When anyone can define security requirements and implement tools to meet them, security becomes more efficient, more scalable and better tailored to the needs of each user or group.

4 Ways to Automate Application Security Ops

Maintaining an online business presence nowadays means that malicious actors are going to target and likely exploit any application vulnerabilities they can find sooner or later. According to the 2021 Mid Year Data Breach Report, although the number of breaches has declined by 24%, the staggering number of records that were exposed (18.8 billion) means that there is still room for improvement.

How can you protect your business from the constant threat of exposure and security breaches? One crucial step is to establish solid foundational layers of security controls that check and validate every part of the SDLC. By using automation when performing those checks, you can detect and prevent common security risks and exposures before they end up in production.

Keep reading for a comprehensive overview of application security automation, along with four ways to automate security ops to protect the core of your business from data breaches.

What Is Application Security?

The term application security (AppSec) refers to the series of processes and tools related to security controls that development teams use during the SDLC. Creating secure software is hard, mainly because there are myriad risks involved. Attackers prefer to target web applications instead of infrastructure components because these applications offer a convenient way to access databases or other internal systems. Defenders need to plug up every conceivable hole, while attackers only have to find one vulnerable spot. This often results in an uneven playing field.

To counter that pervasive threat, development teams must adopt effective methodologies and best practices for developing secure software. One way to do this is to utilize tools that analyze the code both statically and dynamically to pick up known insecure idioms. For example, a tool might flag code that performs unsafe casts, secrets that have been committed to the VCS, or a failure to close streams after they have been used. Developers can manually review these issues and fix them before they get deployed to production.
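
As an illustration, consider the “failure to close streams” case in Go. The sketch below shows the idiomatic fix an analyzer would steer you toward:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// readConfig returns the contents of a file. A version without the
// defer would leak the open file handle on early returns, exactly
// the kind of finding a static analyzer reports.
func readConfig(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close() // guarantees the stream is closed on every path

	return io.ReadAll(f)
}

func main() {
	data, err := readConfig("config.yaml") // hypothetical file name
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("read %d bytes\n", len(data))
}
```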

Another strategy is to scan application dependencies. For example, when developing a financial app, developers might use an open source library that offers a convenient currency model. But how would they know that this library was safe? Dependency scanners monitor those dependencies and check to see if they are out of date or suffer from open CVEs. That way, they will know as soon as possible if anything changes.

Writing secure software starts with integrating proper application security controls and automating the process. We will explain that part next. 

The Main Benefits of Automating Application Security

As we mentioned earlier, there are several tools and processes that development teams employ to flag risks in their software repositories. Automating this task helps you make the most of this process. That’s because you can achieve better coverage when looking for threats and find them sooner when you eliminate the manual parts of the process.

In addition, you will be better equipped to respond to security incidents. Your AppSec teams will have all the context they need to address any issues. Finally, you can achieve better compliance and auditing scores, since this eliminates the risks involved in working manually, such as skipped events and slower response rates. 

Next, we’ll explain four important ways to automate application security operations.

Four Ways to Automate Application Security Ops

1. Trigger Automated Security Flows as Part of Your CI/CD Pipeline

The best place to start with automation is to implement shift-left security within the CI/CD pipelines. When we say CI/CD pipelines, we mean the various steps that are taken when pushing code to a remote environment. These steps include admission to the VCS, triggering the CI pipeline, static code analyzers, security alerts, bots and notification systems, as well as external security integrations. Incorporating these steps gives you the best chance of protecting your application from exploits.
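
Here’s what such a stage might look like as a hypothetical CircleCI-style job. The scanners shown (gosec, gitleaks and govulncheck) are illustrative choices for a Go codebase, not requirements, and the job assumes they are installed in the build image:

```yaml
version: 2.1
jobs:
  security-checks:
    docker:
      - image: cimg/go:1.20
    steps:
      - checkout
      - run:
          name: Static analysis
          command: gosec ./...          # flags insecure Go idioms
      - run:
          name: Secret scan
          command: gitleaks detect --source .
      - run:
          name: Dependency audit
          command: govulncheck ./...
workflows:
  build:
    jobs:
      - security-checks
```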

2. Validate/Enforce Requirements and Perform Periodic Checks When You Create Repositories, Components, and Cloud Environments 

When developers create new repositories or provision new clusters under company accounts, there should be a preliminary check to apply basic security templates and policies. This will prevent gaps or missed security controls from the moment you create those resources until you actually use them. You want to create default standards for all components that prevent them from existing in a substandard security state. 

3. Orchestrate Follow-Ups for Application Security Findings, Assign and Escalate Issues, and Validate Fixes 

Once the system pinpoints security issues in your resources, you should use a separate mechanism to capture those events and store them in a threat intelligence platform. As we explained in this article on the basics of threat intelligence, you can pull and combine those indicators, run customized workflows, and deliver the information you collected to the system of your choice.

4. Automate Updates to Infrastructure-as-Code and Configuration Settings

Finally, consider your usage of Infrastructure-as-Code (IaC) and your configuration settings. These internal tools are part of the developer tooling, and they are also susceptible to exploitation. You will have to enforce the same kind of rules and policies when using those programs. It’s even better if you have an automated tool that monitors and updates only the development tools in your infrastructure. This way, you will not risk exposure or a major upgrade process if some of them become outdated or are found to contain a known vulnerability.

Next Steps: Automating Application Security Ops with Torq

The best way to automate application security ops is to create a strong foundation of tools, processes, and techniques. Attackers are constantly trying to exploit vulnerable applications. However, automating application security ops doesn’t have to be complicated. In fact, security and DevOps teams should be able to use a low-code platform to achieve those targets.

Torq offers a complete no-code platform for automating application security ops using threat intelligence, threat hunting, security bots, and workflow modules. You can request a demo here.

Automating Cloud Security Posture Management Response

When we discuss cybersecurity and the threat of cyber attacks, many may conjure up the image of skillful hackers launching their attacks by way of undiscovered vulnerabilities or using cutting-edge technology. While this may be the case for some attacks, more often than not, vulnerabilities are revealed as a result of careless configuration and inattention to detail. Doors are left open and provide opportunities for attacks. The actual exposure in our systems is due to phishing schemes, incorrectly configured firewalls, overly permissive account permissions, and problems our engineers don’t have time to fix.

This article will introduce you to an actionable strategy to protect your environment using Cloud Security Posture Management (CSPM). We’ll describe what CSPM is and why it’s essential for your organization to implement it. We’ll also cover some of the reasons why organizations fail to implement such a strategy effectively. Finally, we’ll explore practical and straightforward approaches that your organization can pursue right away to protect your digital assets and your customers’ data.

What Is Cloud Security Posture Management (CSPM) and Why Is It Important?

When compared with a traditional data center, the cloud offers significant advantages. Unfortunately, our approach to securing a conventional data center doesn’t translate well to the cloud, so we need to recalibrate how we think about and enforce security. CSPM, or Cloud Security Posture Management, is the process of automating the identification and remediation of security risks across our ecosystem.

Cloud providers such as Amazon Web Services (AWS) and Google Cloud provide an expansive range of services and capabilities. While the cloud host takes care of patch management and ensuring availability, it’s the user’s responsibility to protect their data and services from malicious actors. In recent years, several high-profile data breaches have come about due to improperly configured storage buckets or through accounts with more access than required. 

Why Is CSPM Challenging and What Does It Cover?

The public cloud offers more than simply virtual servers and databases. Modern applications are composed of a multitude of services, each with unique permissions and access-control policies. The age of DevOps requires development teams to have access to a wide range of permissions and the organization’s trust that they’ll use that responsibility carefully. Unfortunately, mistakes happen, and in a model where distributed user accounts, systems and configurations constantly evolve, managing and monitoring it all requires a monumental effort.

If we had to distill the challenge of protecting your digital ecosystem into a single concept, it would be visibility. A comprehensive CSPM strategy consists of providing visibility into all aspects of your environment. This visibility includes:

  • Account and Permission Management
  • Service Configuration
  • Patch and Security Update Management
  • Effective and Efficient Problem Resolution
  • Vulnerability Scans of Applications and Third-party Libraries

A CSPM solution provides visibility into each of these aspects and tracks anomalies and suspicious changes as they happen. It automatically remediates potential problems and threats where possible, and raises appropriate alerts when automatic remediation isn’t available.
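
As a concrete sketch of automated remediation, here’s roughly what closing off a publicly accessible S3 bucket looks like in Go with the aws-sdk-go library. The bucket name is hypothetical; a real CSPM pipeline would receive it from the detection step:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	// Remediate the finding by blocking all forms of public access.
	_, err := svc.PutPublicAccessBlock(&s3.PutPublicAccessBlockInput{
		Bucket: aws.String("example-reports-bucket"), // hypothetical bucket
		PublicAccessBlockConfiguration: &s3.PublicAccessBlockConfiguration{
			BlockPublicAcls:       aws.Bool(true),
			BlockPublicPolicy:     aws.Bool(true),
			IgnorePublicAcls:      aws.Bool(true),
			RestrictPublicBuckets: aws.Bool(true),
		},
	})
	if err != nil {
		log.Fatalf("remediation failed: %v", err) // would raise an alert instead
	}
	log.Println("public access blocked; the finding can be closed automatically")
}
```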

Implementing a CSPM Strategy

Implementing a successful CSPM strategy may seem a little daunting given the scope of what the solution needs to cover and the importance of achieving comprehensive coverage of your entire ecosystem. Most of the large cloud providers have services that can monitor changes within their environments. While they effectively monitor most services within their domain, they are limited to those same services. Ideally, you want to partner with an organization that has invested time and resources into CSPM solutions that can span the breadth of your organization.

Equally important to the coverage of the solution is the capacity for automation. Automated processes can be used for monitoring, analyzing, and remediation when possible. Given the dynamic nature of most environments, manual tracking may not be able to keep up with changes as your organization grows. Additionally, as with configuration and operational tasks, there is always the chance of human error, resulting in missed alerts or, worse, problems that are identified and then forgotten as additional tasks and issues arise.

A successful CSPM solution uses automation extensively, monitoring and detecting problems and automatically remediating such problems or isolating them until the appropriate personnel can address them.

Practical CSPM Use Cases

Implementing an automated CSPM solution will alert you to potential vulnerabilities in your systems, misconfigured resources, and potentially harmful changes. Still, there is more to a CSPM solution than just detection and reporting.

Once the CSPM solution discovers an issue with your environment, a well-designed system will also assist with managing issues, performing such tasks as:

  • Filtering issues by priority and severity so that you can devote resources to the most critical issues first.
  • Organizing related issues and ensuring that issues aren’t duplicated across multiple systems.
  • Periodically performing additional scans and tests to determine whether vulnerabilities and issues have already been addressed.
  • Managing the assignment of issues to the appropriate owner within the organization and escalating tickets that might not be receiving proper attention.

In a nutshell, your CSPM solution should remove much of the guesswork associated with security scans, configuration management, and issue resolution. The system should handle many mundane tasks and only engage your engineers when necessary. This approach will free you and your organization to focus on delivering additional value to your customers and improving your existing offerings.

Learning More

As a leader in the field of automation, Torq is uniquely positioned to help you find and implement a CSPM solution that addresses your organization’s needs. Reach out to Torq to learn more about the services it offers and how it can work with you to improve the security of your systems and manage your cloud environments.

gRPC-web: Using gRPC in Your Front-End Application

At Torq, we use gRPC as our one and only synchronous communication protocol. Microservices communicate with each other using gRPC, our external API is exposed via gRPC and our frontend application (written using VueJS) uses the gRPC protocol to communicate with our backend services. One of the main strengths of gRPC is the community and the language support. Given some proto files, you can generate a server and a client for most programming languages. In this article, I will explain how to communicate with your gRPC backend using the great gRPC-web OSS project.

A quick overview of our architecture

We are using a microservice architecture at Torq. When we initially started, each microservice had an external and internal API endpoint.

After working that way for several months, we realized it doesn’t work as well as we’d imagined. As a result, we decided to adopt the API Gateway/Backend For Frontend approach.

gRPC-web is a JavaScript implementation of gRPC for browser clients. It gives you all the advantages of working with gRPC, such as efficient serialization, a simple IDL, and easy interface updating. A gRPC-web client connects to gRPC services via a special proxy.

Envoy has built-in support for this type of proxy.

Here you can also find an example of the grpc-web proxy implementation in Golang. We use a similar implementation at Torq.

Building gRPC Web clients

We generate gRPC clients for all our external APIs during the CI process. In the case of the gRPC-web client, an npm package is generated and then published to the GitHub package registry. JavaScript applications can then consume this package using the npm install command.

Build example for one of our services

Sample client/server example

Our proto interface

This is a really simple service. You send it GetCurrentTimeRequest and it returns GetCurrentTimeResponse containing the text representation of time.Now().String().
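
A sketch of what that interface might look like (the file and field names are assumptions):

```proto
syntax = "proto3";

package time.v1;

service TimeService {
  rpc GetCurrentTime(GetCurrentTimeRequest) returns (GetCurrentTimeResponse);
}

message GetCurrentTimeRequest {}

message GetCurrentTimeResponse {
  string current_time = 1; // text form of time.Now().String()
}
```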

Generating clients and servers

In order to generate the clients and the servers for this proto file, you need to use the protoc command. Generating the gRPC-web client and JavaScript definitions requires the protoc-gen-grpc-web plugin. You can get it here or use the pre-baked Docker image jfbrandhorst/grpc-web-generators, which contains all the tools needed to work with grpc-web.

This is the command I’m using to generate both the Go clients/servers and the JavaScript clients:
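
(A representative reconstruction; the exact flags and file names may differ from the original post.)

```sh
# Assumes the service above lives in time.proto.
protoc time.proto \
  --go_out=plugins=grpc:./time/goclient \
  --js_out=import_style=commonjs:./frontend/src/jsclient \
  --grpc-web_out=import_style=commonjs,mode=grpcwebtext:./frontend/src/jsclient
```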

It will put the Go clients in ./time/goclient and the JavaScript clients in ./frontend/src/jsclient.

It’s worth noting that the client generator is also able to generate TypeScript code, which you can read more about in its documentation.

Backend

A really basic Go server which just listens on 0.0.0.0:8080. It implements the TimeServiceServer interface and returns time.Now().String() for each request.
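
A minimal sketch of such a server, assuming the package generated from the .proto sketch above (the import path is hypothetical):

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"

	pb "example.com/time/goclient" // hypothetical import path for the generated code
)

// server implements the generated TimeServiceServer interface.
type server struct{}

func (s *server) GetCurrentTime(ctx context.Context, req *pb.GetCurrentTimeRequest) (*pb.GetCurrentTimeResponse, error) {
	return &pb.GetCurrentTimeResponse{CurrentTime: time.Now().String()}, nil
}

func main() {
	lis, err := net.Listen("tcp", "0.0.0.0:8080")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	pb.RegisterTimeServiceServer(s, &server{})
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
```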

Frontend

Using gRPC-web in the frontend is pretty simple, as shown by the example below.
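
Here’s a hedged sketch of the client code; module and getter names follow the .proto sketch above, and the URL points at the gRPC-web proxy described in the next section, not at the gRPC server itself:

```js
// Generated-module names assume the service lives in time.proto.
import { TimeServiceClient } from './jsclient/time_grpc_web_pb';
import { GetCurrentTimeRequest } from './jsclient/time_pb';

const client = new TimeServiceClient('http://localhost:8081'); // the Envoy proxy

client.getCurrentTime(new GetCurrentTimeRequest(), {}, (err, response) => {
  if (err) {
    console.error('RPC failed:', err.message);
    return;
  }
  console.log('server time:', response.getCurrentTime());
});
```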

A small tip – I recommend enabling the gRPC-web Chrome extension. It’s a great way to inspect the gRPC traffic flowing to and from the browser, just as you would with the Network Activity Inspector built into Chrome.

Envoy configuration

As I previously mentioned, gRPC-web needs a proxy to translate its calls into gRPC. Envoy has native support for this, and the following configuration example does exactly that.
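
Below is an abbreviated sketch of such a configuration. It assumes the Go backend from above on port 8080 and serves gRPC-web on port 8081; a production config would also need CORS settings:

```yaml
static_resources:
  listeners:
    - name: grpc_web_listener
      address:
        socket_address: { address: 0.0.0.0, port_value: 8081 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: time_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: time_service }
                http_filters:
                  - name: envoy.filters.http.grpc_web   # translates gRPC-web to gRPC
                  - name: envoy.filters.http.router
  clusters:
    - name: time_service
      connect_timeout: 0.25s
      type: logical_dns
      lb_policy: round_robin
      http2_protocol_options: {}   # the backend speaks HTTP/2 (gRPC)
      load_assignment:
        cluster_name: time_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: 127.0.0.1, port_value: 8080 }
```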

Final words

I hope this article will help you easily dive into gRPC-web. It is a great technology, especially if you are already using gRPC everywhere. We’re using it with great success in our frontend application. If you’re interested in learning more, you can get started with the source code for this article here.

Adopt the “Beyonce Rule” for Scalable Impact

Recently, I started to read the invaluable book Software Engineering at Google. It’s a great book by Google, describing their engineering practices across many different domains.

One of the first chapters discusses the matter of making a “scalable impact,” which I find very interesting, and something that I believe has been overlooked by many organizations.

What is Scalable Impact?

Creating “scalable impact” means making a change that improves your organization’s engineering practices without requiring additional effort for each new team member.

First, let’s review some examples that don’t demonstrate “scalable impact”.

What Hinders Scalable Impact?

1. Code Reviews

Code reviews have many advantages: they allow teams to share knowledge, catch design flaws, and enforce common patterns and practices. Seems like a good idea, right? Well, the problem is — these don’t scale. The larger the team, the larger the effort, and it grows linearly with each new developer.

2. Manual QA

Similar to code reviews, manual QA for each release doesn’t scale. As a team grows, release velocity increases. As release velocity increases, more releases require manual QA — creating a bottleneck and a single point of failure.

3. Manual deployment approvals

In many organizations, only a small, dedicated team can perform the actual deployment to production. Just as with manual QA, increased release velocity brought on by team growth turns this into a function that blocks scale.

4. Excessive documentation

Documentation is useful — it allows teams to share knowledge in-house and publicly without having to be there to personally explain things. But as you create more and more documentation, there are two downsides: you have to keep it up to date, and devs need to read it. And we devs (or we as human beings…) can be lazy. We don’t like to do things that require a ton of effort; we love to take the easy way, if possible. We don’t read the docs, and we definitely don’t update the docs if we change something. So in many cases, the end result is a waste of time, or an old and out-of-date document that no one even reads. In the end, the conventions you created may not be used anywhere.

How to Make Scalable Impact

Okay, so how exactly do you make scalable impact then? At Torq, we’ve adopted a number of practices that help us realize scalable impact and set our team up for successful growth. I’ll highlight each of these examples below, and talk through the details of them in a future post.

1. Centralized Linting

Let’s say that one day you decide all your developers have to avoid using errors.Wrapf and instead use fmt.Errorf. At that point, most organizations tend to create a new convention page and write it down there. Later, a nice Slack message will be sent to the #rnd channel. Something like this:

“Hi there, we decided to stop using errors.Wrapf, please use only fmt.Errorf from now on. See the conventions here: …”

How many of you are familiar with this practice? If you’re familiar with it, you probably also realize this won’t work. 

Why, you ask? Because human beings don’t work well with rules unless they’re reinforced. That wiki page describing the newest convention? It’s already out of date the moment you write it.

So how do you solve that issue, then? My advice: Come up with a centralized linting solution. By centralized, I mean one that immediately affects all new builds, without any changes to your projects.

Returning to the example above, with centralized linting, you change your global linting rules. That means you immediately avoid any new usages of the old convention. No questions asked, and nothing to remember. It’s as simple as that — the PR won’t be mergeable unless the new convention is used. No documentation to maintain, no convention to teach new recruits. Your documentation is now the linting rules code. 
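
As an example, here’s a hypothetical .golangci.yml fragment; golangci-lint’s forbidigo linter is one way to encode the rule, and it pairs well with ReviewDog:

```yaml
# Once this lands in the shared lint config, any new use of
# errors.Wrapf fails CI in every project at once.
linters:
  enable:
    - forbidigo
linters-settings:
  forbidigo:
    forbid:
      - 'errors\.Wrapf'   # use fmt.Errorf (with %w) instead
```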

There you have it: Scalable Impact.

At Torq we use ReviewDog to achieve this, which I’ll describe in detail in a later post.

2. Unified/Reusable CI pipeline

Another story you may all be able to relate to: One day your CTO reaches out and asks for the full list of dependency licenses used across ALL your codebase.

So then you start googling and you find an awesome tool doing exactly that. But now you’re stuck. You have to run that tool and collect the results for ALL your projects, and we’re talking about a minimum of 50 projects (or many more for larger organizations).

Here’s the (happy!) twist: With unified CI Pipelines, this task becomes easy.

By unified, I mean maintaining a common baseline for all your CI pipelines. One that you can change from a single location, with a single change.

To solve the above issue, you add the license-extraction logic to your common base and execute it as part of your CI pipelines. Then just let your CI do the rest.

Another (real) example: Let’s say you want to change your unit-testing runner.

You decided that gotestsum is the real deal, and your teammates chime in: “That old go test is useless. We MUST move to the new shiny tool.”

More opportunity for scalable impact: Just change the test part of your unified CI, and all your projects will use gotestsum instead of go test. 

To achieve this, we use Makefile inheritance and CircleCI orbs. Again, I’ll dig into this in a future post.
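
Here’s a hypothetical sketch of the idea: a shared common.mk that every project’s Makefile includes, so both the license report and the test-runner swap become single-location changes:

```make
# common.mk: a hypothetical shared base included by every project's Makefile.

# Swap the test runner here and every project's CI picks it up.
TEST_RUNNER ?= gotestsum --format standard-verbose --

.PHONY: test licenses

test:
	$(TEST_RUNNER) ./...

# The license story above becomes one new target, executed by every CI
# (go-licenses is one tool that produces such a report).
licenses:
	go-licenses csv ./...
```

```make
# Makefile in each individual repo
include common.mk
```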

3. Automated E2E Tests

Nothing new here — each and every deployment should pass the same test suite. Again, every deployment.

Green -> deployed

Red -> rolled back

No “sanity suite”, no “Ah, this one is flaky, you can go ahead and deploy”. No user intervention. Of course, this means your complete E2E suite should be fast and reliable (I use under 5 minutes as a rule of thumb).

Adopt the “Beyonce Rule”

“If you liked it then you shoulda put a ring on it!” said the mighty Beyonce in her famous song, Single Ladies (Put a Ring on It).

Later, mischievous devs took that line and rephrased it to “if you like it, put a [test, lint, e2e, ticket] on it!”

Put plainly, new investments or changes require you to put the right framework in place to make them scalable. Getting started with this requires having the right tools; but after that, it’s easy to adopt the practice.

Found a new issue that can be caught by a linting rule? Easy! Add the lint rule to your centralized linting solution.

Want to add a new tool that tracks the test coverage? Simple! Change the unified CI configs.

The upfront investment here helps the entire team grow and delivers returns (in reduced toil and increased velocity) over time. Applying the Beyonce Rule turns your team from Danity Kane into Destiny’s Child. It becomes super easy to add or change existing conventions and processes. The Beyonce Rule is cost-effective and easy to adopt.