ITOps vs. SecOps vs. DevOps vs. DevSecOps

ITOps, SecOps, DevOps, and DevSecOps may sound similar. And they are — to a degree. But they have different areas of focus, histories, and operational paradigms.

Keep reading for an overview of what ITOps (IT operations), SecOps (security operations), DevOps (development operations), and DevSecOps (development, security, and operations) mean and how they compare — and why you shouldn’t worry as much about defining these terms perfectly as about finding ways to operationalize collaboration between your various teams.

SecOps vs. ITOps

SecOps is what you get when you combine security teams with IT operations teams, or ITOps. Put another way, it’s the integration of security into IT operations.

Traditionally, most organizations have maintained both ITOps and security teams. The ITOps team’s responsibility is to manage core IT processes — like provisioning infrastructure, deploying applications, and responding to performance issues. The security team, meanwhile, specializes in identifying and responding to security risks.

In the past, security and IT operations did not work in tandem. Each pursued its own responsibilities in isolation from the other.

SecOps changes that. The big idea behind SecOps is that it combines security with ITOps in a way that maximizes collaboration.

This isn’t to say that ITOps teams are totally incapable of managing security without a SecOps mindset. Any decent IT team has always done its best to secure the environments it manages. But ITOps engineers rarely specialized in security; the task of identifying and responding to security problems fell to a separate team of security professionals.

With SecOps, the security team works more closely with the IT operations team, and vice versa. When done well, SecOps ensures that security is an active priority across all day-to-day IT operations rather than something that is managed separately.

To be clear, SecOps doesn’t mean turning your security and ITOps teams into a single, combined team. The teams remain separate; they just work more closely together.

ITOps vs. DevOps

DevOps is a collaboration between developers and IT operations teams.

Like SecOps, DevOps was conceived to address inefficiencies associated with isolation between teams. The goal of DevOps is to ensure that developers understand the needs of ITOps when they write software, and that IT operations teams understand what developers intend for software to do when they manage it.

Also like SecOps, DevOps doesn’t erase the independent development and ITOps functions. Some organizations may choose to create a new DevOps team alongside these two other teams, while others “do” DevOps simply by finding ways for developers and IT engineers to work more closely together. Either way, though, businesses still typically keep their development and IT operations teams.

SecOps vs. DevOps

SecOps and DevOps share key high-level similarities:

  • Their main goal is to improve collaboration between teams that would otherwise operate independently.
  • They tend to encourage automation and real-time communication in an effort to foster collaboration.
  • They increase the efficiency and scalability of complex operations.
  • They represent philosophies or goals more than specific operational frameworks. In other words, there is no specific recipe to follow or tool to use in order to enable either SecOps or DevOps. It’s up to organizations to decide how to operationalize both concepts.

The big difference between the two concepts is the specific teams involved. As we’ve noted, SecOps brings together security teams and ITOps teams, while DevOps focuses on collaboration between developers and ITOps.

So, ITOps is part of both equations, but SecOps and DevOps are otherwise different.

What about DevSecOps?

It’s hard to talk about ITOps, SecOps, and DevOps without also mentioning DevSecOps, a concept that brings all the teams we’ve talked about so far — development, security, and IT operations — together into a collaborative model.

You can find different definitions of DevSecOps out there. Some treat it as the result of combining DevOps with SecOps. Others imply that the distinction lies in how much your DevSecOps program focuses on development as opposed to IT operations. 

One way to think about DevSecOps is that it embraces the “shift left” of security, meaning that security implementation and testing happens much earlier in software and application development as opposed to being added in afterward.

The differences between DevOps, SecOps, and DevSecOps are nuanced, but at their core they are collaborative efforts by once disparate teams looking to break down silos.

Collaboration Is the Key

The key takeaway is that with ITOps, SecOps, DevOps, and DevSecOps, collaboration is the foundation for success.

What really matters is the ability to ensure that all stakeholders — developers, IT engineers, security engineers, and anyone else who plays a role in software development and delivery — have access to the tools and data necessary to integrate security into all aspects of the software delivery process. That only happens when security becomes the responsibility of everyone, not just a specialized team of cybersecurity experts.

Whether you want to approach integrated ITOps through SecOps, DevOps, DevSecOps, or all three, your goal should be to find ways to achieve meaningful collaboration between your various teams. Don’t just think in abstract terms; think about what it means on a day-to-day basis to ensure that each team understands and can help support the goals of other teams rather than existing on its own island.

Open Source Cybersecurity: Towards a Democratized Framework

This post was previously published on The New Stack.

Today, anyone can contribute to some of the world’s most important software platforms and frameworks, such as Kubernetes, the Linux kernel or Python. They can do this because these platforms are open source, meaning they are collaboratively developed by global communities.

What if we applied the same principles of democratization and free access to cybersecurity? In other words, what if anyone could contribute to security initiatives and help build a cybersecurity culture without requiring privileged access or specialized expertise?

To explore those questions, it’s worth considering the way that open source has democratized software development and comparing it to the potential we stand to realize by democratizing security. 

Although the connection between open source software and democratized security only goes so far, thinking from this angle suggests a new approach to solving the cybersecurity challenges that have become so intractable for modern businesses.

So, let’s take a look at how open source democratized development and what it might look like to do the same thing for security.

Open Source and the Democratization of Software Development

Until GNU and the Free Software Foundation emerged in the mid-1980s, almost no one thought of software as something that anyone could help design or create. Instead, if you wanted to build software, you needed membership in an elite club. You generally had to be a credentialed, professional programmer, and you typically had to have a job that put you in a position to create software.

GNU began to change this by allowing volunteers to build software, like the GNU C compiler, that eventually became massively important across the IT industry. Then, in the early 1990s, the advent of the Linux kernel project changed things even more. The Linux kernel was created by an undergraduate, Linus Torvalds, whom no professional programmer had ever heard of. If a twenty-something non-professional Finnish student could create a major software platform, it was clear that anyone could.

Today, anyone still can. You don’t need a special degree or professional connections to contribute to a platform like Kubernetes, as thousands of independent programmers do. Nor do today’s users need to rely on a handful of elite programmers to write the software they want to use. They can go out and help write it themselves via participation in open source projects. (Of course, the big caveat is that you need special skills to write open source software. But we’ll get to that point later.)

As a result of this democratization of development, it has become much easier for users to obtain the software they’d like to use, as opposed to the software that closed-source vendors think they should use. Open source has also turned out to make software development faster, and it tends to lower the overall cost of software development.

Towards an Open Source Cybersecurity Framework

Now, imagine what would happen if the world of cybersecurity were democratized in the way that software development has been democratized by open source.

It would create a world where security would cease to be the domain of elite security experts alone. Instead, anyone would be able to help identify the security problems that they face, then build the solutions they need to address them, just as software users today can envision the software platforms and features they’d like to see, then help implement them through open source projects.

In other words, users wouldn’t need to wait on middlemen — that is, the experts who hold the reins — to build the security solutions they needed. They could build them themselves.

That doesn’t mean that security experts would go away. They’d still be critical, just as professional programmers working for major corporations continue to play an important role alongside independent programmers in open source software development. 

But instead of requiring small groups of security specialists to identify all the cybersecurity risks and solve them on their own, these tasks would be democratized and shared across organizations as a whole.

The result would be a world where security solutions could be implemented faster. The solutions would also more closely align with the needs of users, because the users would be building them themselves. 

And security operations would likely become less costly, too, because the work would be spread out across organizations instead of being handled only by high-earning security professionals.

The Differences Between Open Source and Security Democratization

It’s important, of course, not to overstretch the analogy between open source democratization and security democratization. In particular, there are two main differences between these concepts.

One is that, whereas open source software can be built and shared by communities at large, security is mostly applied internally. The security workflows that users might define using security democratization tools would apply only inside their own companies, rather than being shared with users at large.

The other, bigger limitation is that it takes special skills to build software. While it’s possible for non-coders to contribute to open source projects by doing things like writing documentation or designing artwork, most contributions require the ability to code.

In contrast, security democratization doesn’t have to require specialized security skills on the part of users. By taking advantage of no-code tools that let anyone define, search for and remediate security risks, businesses can democratize their security cultures without having to teach every employee to be a cybersecurity expert.

What this means is that, although it may seem far-fetched at first glance to democratize security, the means for doing so — no-code automation frameworks — are already there. They just need to be placed in the hands of users.

Conclusion

Over the course of a generation, the open source software movement radically transformed the way software is built. Although plenty of closed-source code continues to exist, programmers today can contribute to critical software platforms that operate on an open source model without requiring special connections or credentials. The result has been software that better supports users, and that takes less time and money to build.

Businesses can reap similar benefits in the realm of security by using no-code security automation tools to democratize their approach to security. When anyone can define security requirements and implement tools to meet them, security becomes more efficient, more scalable and better tailored to the needs of each user or group.

4 Ways to Automate Application Security Ops

Maintaining an online business presence nowadays means that, sooner or later, malicious actors will target and likely exploit any application vulnerability they can find. According to the 2021 Mid Year Data Breach Report, although the number of breaches declined by 24%, the staggering number of records exposed (18.8 billion) means there is still room for improvement.

How can you protect your business from the constant threat of exposure and security breaches? One crucial step is to establish solid foundational layers of security controls that check and validate every part of the software development life cycle (SDLC). By using automation when performing those checks, you can detect and prevent common security risks and exposures before they end up in production.

Keep reading for a comprehensive overview of application security automation, along with four ways to automate security ops to protect the core of your business from data breaches.

What Is Application Security?

The term application security (AppSec) refers to the series of processes and tools related to security controls that development teams use during the SDLC. Creating secure software is hard, mainly because there are myriad risks involved. Attackers prefer to target web applications instead of infrastructure components because these applications offer a convenient way to access databases or other internal systems. Defenders need to plug up every conceivable hole, while attackers only have to find one vulnerable spot. This often results in an uneven playing field.

To counter that pervasive threat, development teams must adopt effective methodologies and best practices for developing secure software. One way to do this is to utilize tools to analyze the code both statically and dynamically to pick up any known insecure idioms. For example, a tool might flag code that is implementing unsafe casting, secrets that have been committed to VCS, or a failure to close streams after they have been used. Developers can manually review these issues and fix them before they get deployed to production.

Another strategy is to scan application dependencies. For example, when developing a financial app, developers might use an open source library that offers a convenient currency model. But how would they know that this library was safe? Dependency scanners monitor those dependencies and check to see if they are out of date or suffer from open CVEs. That way, they will know as soon as possible if anything changes.
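As an illustrative sketch only (assuming a Go backend and a JavaScript frontend, with the govulncheck and npm tooling available in CI), a dependency-scanning step might run commands like:

```shell
# Scan Go module dependencies against the Go vulnerability database
go install golang.org/x/vuln/cmd/govulncheck@latest
govulncheck ./...

# For a JavaScript frontend, audit npm dependencies for known CVEs
npm audit --audit-level=high
```

Both commands exit non-zero on findings, so wiring them into CI fails the build as soon as a vulnerable dependency appears.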

Writing secure software starts with integrating proper application security controls and automating the process. We will explain that part next. 

The Main Benefits of Automating Application Security

As we mentioned earlier, there are several tools and processes that development teams employ to flag risks in their software repositories. Automating these tasks helps you get the most out of the process: eliminating the manual parts lets you achieve better coverage when looking for threats and find them sooner.

In addition, you will be better equipped to respond to security incidents. Your AppSec teams will have all the context they need to address any issues. Finally, you can achieve better compliance and auditing scores, since this eliminates the risks involved in working manually, such as skipped events and slower response rates. 

Next, we’ll explain four important ways to automate application security operations.

Four Ways to Automate Application Security Ops

1. Trigger Automated Security Flows as Part of Your CI/CD Pipeline

The best place to start with automation is to implement shift-left security within the CI/CD pipelines. By CI/CD pipelines, we mean the various steps that are taken when pushing code to a remote environment. These steps include committing to the VCS, triggering the CI pipeline, and running static code analyzers, security alerts, bots, and notification systems, as well as external security integrations. Incorporating these steps will give you the best chance of protecting your application from exploits.
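As a hypothetical sketch (action names, versions, and tools here are illustrative, not a specific recommendation), a GitHub Actions workflow that runs such checks on every push might look like:

```yaml
# Hypothetical workflow: security checks run on every push,
# before any build or deploy stage.
name: ci-security
on: [push, pull_request]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan for committed secrets
        uses: gitleaks/gitleaks-action@v2
      - name: Static analysis
        run: go vet ./...
      - name: Dependency scan
        run: govulncheck ./...
```

Because the job gates the pull request, findings surface before the code ever reaches production.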

2. Validate/Enforce Requirements and Perform Periodic Checks When You Create Repositories, Components, and Cloud Environments 

When developers create new repositories or provision new clusters that operate under company accounts, there should be a preliminary check that applies basic security templates and policies. This prevents gaps or missed security controls between the moment you create those resources and the moment you actually use them. You want default standards for all components that prevent them from ever existing in a sub-standard security state.

3. Orchestrate Follow-Ups for Application Security Findings, Assign and Escalate Issues, and Validate Fixes 

Once the system pinpoints security issues in your resources, you should use a separate mechanism to capture those events and store them in a threat intelligence platform. As we explained in this article on the basics of threat intelligence, you can pull and combine those indicators, run customized workflows, and deliver the information you collected to the system of your choice.

4. Automate Updates to Infrastructure-as-Code and Configuration Settings

Finally, consider your usage of Infrastructure-as-Code (IaC) and your configuration settings. These internal tools are part of the developer tooling, and they are also susceptible to exploitation. You will have to enforce the same kind of rules and policies when using those programs. It’s even better if you have an automated tool that monitors and updates only the development tools in your infrastructure. This way, you will not risk exposure or a major upgrade process if some of them become outdated or are found to contain a known vulnerability.

Next Steps: Automating Application Security Ops with Torq

The best way to automate application security ops is to create a strong foundation of tools, processes, and techniques. Attackers are constantly trying to exploit vulnerable applications. However, automating application security ops doesn’t have to be complicated. In fact, security and DevOps teams should be able to use a low-code platform to achieve those targets.

Torq offers a complete no-code platform for automating application security ops using threat intelligence, threat hunting, security bots, and workflow modules. You can request a demo here.


Automating Cloud Security Posture Management Response

When we discuss cybersecurity and the threat of cyber attacks, many may conjure up the image of skillful hackers launching their attacks by way of undiscovered vulnerabilities or using cutting-edge technology. While this may be the case for some attacks, more often than not, vulnerabilities are revealed as a result of careless configuration and inattention to detail. Doors are left open and provide opportunities for attacks. The actual exposure in our systems is due to phishing schemes, incorrectly configured firewalls, overly permissive account permissions, and problems our engineers don’t have time to fix.

This article will introduce you to an actionable strategy to protect your environment using Cloud Security Posture Management (CSPM). We’ll describe what CSPM is and why it’s essential for your organization to implement it. We’ll also cover some of the reasons why organizations fail to implement such a strategy effectively. Finally, we’ll explore practical and straightforward approaches that your organization can pursue right away to protect your digital assets and your customers’ data.

What Is Cloud Security Posture Management (CSPM) and Why Is It Important?

When compared with a traditional data center, the cloud offers significant advantages. Unfortunately, our approach to securing a conventional data center doesn’t translate well to the cloud, so we need to recalibrate how we think about and enforce security. CSPM, or Cloud Security Posture Management, is the process of automating the identification and remediation of security risks across our ecosystem.

Cloud providers such as Amazon Web Services (AWS) and Google Cloud provide an expansive range of services and capabilities. While the cloud host takes care of patch management and ensuring availability, it’s the user’s responsibility to protect their data and services from malicious actors. In recent years, several high-profile data breaches have come about due to improperly configured storage buckets or through accounts with more access than required. 

Why Is CSPM Challenging and What Does It Cover?

The public cloud offers more than simply virtual servers and databases. Modern applications are composed of a multitude of services, each with unique permissions and access-control policies. The age of DevOps requires development teams to have access to a wide range of permissions, and the organization’s trust that they’ll use that responsibility carefully. Unfortunately, mistakes happen; and in a model where distributed user accounts, systems, and configurations constantly evolve, managing and monitoring those accounts requires a monumental effort.

If we had to distill the challenges of protecting your digital ecosystem into a single concept, it would be visibility. A comprehensive CSPM strategy provides visibility into all aspects of your environment, including:

  • Account and Permission Management
  • Service Configuration
  • Patch and Security Update Management
  • Effective and Efficient Problem Resolution
  • Vulnerability Scans of Applications and Third-party Libraries

A CSPM solution provides visibility into each of these aspects and tracks anomalies and suspicious changes as they happen. It automatically remediates potential problems and threats where possible, and raises appropriate alerts when automatic remediation isn’t available.

Implementing a CSPM Strategy

Implementing a successful CSPM strategy may seem a little daunting given the scope of what the solution needs to cover and the importance of achieving comprehensive coverage of your entire ecosystem. Most of the large cloud providers have services that can monitor changes within their environments. While they effectively monitor most services within their domain, they are limited to those same services. Ideally, you want to partner with an organization that has invested time and resources into CSPM solutions that can span the breadth of your organization.

Equally important to the coverage of the solution is its capacity for automation. Automated processes can be used for monitoring, analyzing, and, when possible, remediating. Given the dynamic nature of most environments, manual tracking may not be able to keep up with changes as your organization grows. Additionally, as with any configuration or operational task, there is always the chance of human error, resulting in missed alerts or, worse, problems that are identified and then forgotten as additional tasks and issues arise.

A successful CSPM solution uses automation extensively, monitoring and detecting problems and automatically remediating such problems or isolating them until the appropriate personnel can address them.

Practical CSPM Use Cases

Implementing an automated CSPM solution will alert you to potential vulnerabilities in your systems, misconfigured resources, and potentially harmful changes. Still, there is more to a CSPM solution than just detection and reporting.

Once the CSPM solution discovers an issue with your environment, a well-designed system will also assist with managing issues, performing such tasks as:

  • Filtering issues by priority and severity so that you can devote resources to the most critical issues first.
  • Organizing related issues and ensuring that issues aren’t duplicated across multiple systems.
  • Periodically performing additional scans and tests to determine whether vulnerabilities and issues have already been addressed.
  • Managing the assignment of issues to the appropriate owner within the organization and escalating tickets that might not be receiving proper attention.

In a nutshell, your CSPM solution should remove much of the guesswork associated with security scans, configuration management, and issue resolution. The system should handle many mundane tasks and only engage your engineers when necessary. This approach will free you and your organization to focus on delivering additional value to your customers and improving your existing offerings.

Learning More

As a leader in the field of automation, Torq is uniquely positioned to help you find and implement a CSPM solution that addresses your organization’s needs. Reach out to Torq to learn more about the services it offers and how it can work with you to improve the security of your systems and manage your cloud environments.

gRPC-web: Using gRPC in Your Front-End Application

At Torq, we use gRPC as our one and only synchronous communication protocol. Microservices communicate with each other using gRPC, our external API is exposed via gRPC, and our frontend application (written in VueJS) uses the gRPC protocol to communicate with our backend services. One of the main strengths of gRPC is its community and language support: given some proto files, you can generate a server and a client for most programming languages. In this article, I will explain how to communicate with your gRPC backend using the great gRPC-web OSS project.

A quick overview of our architecture

We are using a microservice architecture at Torq. When we initially started, each microservice had an external and internal API endpoint.

After working that way for several months, we realized it doesn’t work as well as we’d imagined. As a result, we decided to adopt the API Gateway/Backend For Frontend approach.

gRPC-web is a JavaScript implementation of gRPC for browser clients. It gives you all the advantages of working with gRPC, such as efficient serialization, a simple IDL, and easy interface updating. A gRPC-web client connects to gRPC services via a special proxy, as shown below.

Envoy has built-in support for this type of proxy.

Here you can also find an example of the grpc-web proxy implementation in Golang. We use a similar implementation at Torq.

Building gRPC-web Clients

We generate gRPC clients for all our external APIs during the CI process. In the case of the gRPC-web client, an npm package is generated and then published to the GitHub package registry. JavaScript applications can then consume this package using the npm install command.

Build example for one of our services

Sample client/server example

Our proto interface

This is a really simple service. You send it a GetCurrentTimeRequest, and it returns a GetCurrentTimeResponse containing the text representation of time.Now().String().
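A proto definition matching this description might look like the following sketch (the package name and go_package path are assumptions for illustration):

```protobuf
syntax = "proto3";

package time.v1;

// Illustrative module path; adjust to your repository layout.
option go_package = "example.com/time/goclient;timepb";

// GetCurrentTimeRequest is intentionally empty.
message GetCurrentTimeRequest {}

// GetCurrentTimeResponse carries the server time as a string,
// i.e. the result of time.Now().String() on the backend.
message GetCurrentTimeResponse {
  string current_time = 1;
}

service TimeService {
  rpc GetCurrentTime(GetCurrentTimeRequest) returns (GetCurrentTimeResponse);
}
```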

Generating clients and servers

In order to generate the clients and the servers for this proto file, you need to use the protoc command. Generating the gRPC-web client and JavaScript definitions requires the protoc-gen-grpc-web plugin. You can get it here or use the pre-baked Docker image jfbrandhorst/grpc-web-generators, which contains all the tools needed to work with grpc-web.

This is the command I’m using to generate both the Go clients/servers and the JavaScript clients:
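Run from the repository root with the Docker image mentioned above, the invocation might look roughly like this (the proto file path is an assumption; the flags follow the standard protoc-gen-grpc-web options):

```shell
docker run --rm -v "$(pwd):/src" -w /src jfbrandhorst/grpc-web-generators \
  protoc -I. time/time.proto \
    --go_out=plugins=grpc:./time/goclient \
    --js_out=import_style=commonjs:./frontend/src/jsclient \
    --grpc-web_out=import_style=commonjs,mode=grpcwebtext:./frontend/src/jsclient
```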

It will put the Go clients in ./time/goclient and the JavaScript clients in ./frontend/src/jsclient.

It’s worth noting that the client generator is also able to generate TypeScript code, which you can read more about in its documentation.

Backend

A really basic Go server that just listens on 0.0.0.0:8080. It implements the TimeServiceServer interface and returns time.Now().String() for each request.
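A sketch of such a server, assuming the generated timepb package and the google.golang.org/grpc module (the import path for the generated code is illustrative):

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"

	timepb "example.com/time/goclient" // generated package; path is illustrative
)

// server implements the generated TimeServiceServer interface.
type server struct {
	timepb.UnimplementedTimeServiceServer
}

// GetCurrentTime returns the current server time as a string.
func (s *server) GetCurrentTime(ctx context.Context, req *timepb.GetCurrentTimeRequest) (*timepb.GetCurrentTimeResponse, error) {
	return &timepb.GetCurrentTimeResponse{CurrentTime: time.Now().String()}, nil
}

func main() {
	lis, err := net.Listen("tcp", "0.0.0.0:8080")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	timepb.RegisterTimeServiceServer(s, &server{})
	log.Println("time service listening on 0.0.0.0:8080")
	if err := s.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```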

Frontend

Using gRPC-web in the frontend is pretty simple, as shown by the example below.
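A sketch of what such a call might look like, assuming the generated client modules from the npm package built in CI (module paths and the proxy port are illustrative):

```javascript
// Generated by protoc-gen-grpc-web; paths depend on your package layout.
import { TimeServiceClient } from './jsclient/time_grpc_web_pb';
import { GetCurrentTimeRequest } from './jsclient/time_pb';

// The client talks to the grpc-web proxy (e.g. Envoy), not to the
// gRPC server directly.
const client = new TimeServiceClient('http://localhost:8081');

const request = new GetCurrentTimeRequest();
client.getCurrentTime(request, {}, (err, response) => {
  if (err) {
    console.error(`gRPC error ${err.code}: ${err.message}`);
    return;
  }
  console.log('server time:', response.getCurrentTime());
});
```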

A small tip: I recommend enabling the gRPC-web Chrome extension. It’s a great way to inspect the gRPC traffic going to and from the browser, just as you would with the Network Activity Inspector built into Chrome.

Envoy configuration

As I previously mentioned, gRPC-web needs a proxy to translate its requests into gRPC. Envoy has native support for this, and the following configuration example for Envoy does exactly that.
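A hedged reconstruction of a minimal configuration of this kind (ports, names, and addresses are illustrative; the grpc_web HTTP filter does the translation):

```yaml
# Envoy listens on 8081 for gRPC-web traffic from the browser and
# forwards plain gRPC to the backend on 8080.
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8081 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                route_config:
                  virtual_hosts:
                    - name: backend
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: time_service }
                http_filters:
                  - name: envoy.filters.http.grpc_web
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: time_service
      type: logical_dns
      # gRPC upstreams require HTTP/2.
      typed_extension_protocol_options:
        envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
          "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
          explicit_http_config:
            http2_protocol_options: {}
      load_assignment:
        cluster_name: time_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: 127.0.0.1, port_value: 8080 }
```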

Final words

I hope this article will help you easily dive into gRPC-web. It is a great technology, especially if you are already using gRPC everywhere. We’re using it with great success in our frontend application. If you’re interested in learning more, you can get started with the source code for this article here.

Adopt the “Beyonce Rule” for Scalable Impact

Recently, I started to read the invaluable book Software Engineering at Google. It’s a great book by Google, describing their engineering practices across many different domains.

One of the first chapters discusses the matter of making a “scalable impact,” which I find very interesting, and something that I believe has been overlooked by many organizations.

What is Scalable Impact?

Creating “scalable impact” means making a change that improves your organization’s engineering practices without the effort involved growing with each new team member.

First, let’s review some examples that don’t demonstrate “scalable impact”.

What Hinders Scalable Impact?

1. Code Reviews

Code reviews have many advantages: they allow teams to share knowledge, catch design flaws, and enforce common patterns and practices. Seems like a good idea, right? Well, the problem is that they don’t scale: the larger the team, the larger the effort, and it grows linearly with each new developer.

2. Manual QA

Similar to code reviews, manual QA for each release doesn’t scale. As a team grows, release velocity increases; as release velocity increases, more releases require manual QA, creating a bottleneck and a single point of failure.

3. Manual deployment approvals

In many organizations, only a small, dedicated team can perform the actual deployment to production. Just as with manual QA, increased release velocity brought on by team growth turns this into a function that blocks scale.

4. Excessive documentation

Documentation is useful: it allows teams to share knowledge in-house and publicly without having to personally explain things. But as you create more and more documentation, two downsides appear: you have to keep it up to date, and developers have to read it. And we developers (like most human beings) can be lazy. We don’t like doing things that require a ton of effort; we take the easy way when possible. We don’t read the docs, and we definitely don’t update them when something changes. In many cases, the end result is wasted time, or an out-of-date document that no one reads. In the end, the conventions you created may not be used anywhere.

How to Make Scalable Impact

Okay, so how exactly do you make scalable impact then? At Torq, we’ve adopted a number of practices that help us realize scalable impact and set our team up for successful growth. I’ll highlight each of these examples below, and talk through the details of them in a future post.

1. Centralized Linting

Let’s say that one day you decide all your developers should stop using errors.Wrapf and use fmt.Errorf instead. At that point, most organizations tend to create a new convention page and write the rule down there. Later, a nice Slack message goes out to the #rnd channel. Something like this:

“Hi there, we decided to stop using errors.Wrapf, please use only fmt.Errorf from now on. See the conventions here: …”

How many of you are familiar with this practice? If you’re familiar with it, you probably also realize this won’t work. 

Why, you ask? Because human beings don’t work well with rules unless those rules are enforced. That wiki page describing the newest convention? It’s already out of date the moment you write it.

So how do you solve that issue, then? My advice: Come up with a centralized linting solution. By centralized, I mean one that immediately affects all new builds, without any changes to your projects.

Returning to the example above, with centralized linting, you change your global linting rules. That means you immediately avoid any new usages of the old convention. No questions asked, and nothing to remember. It’s as simple as that — the PR won’t be mergeable unless the new convention is used. No documentation to maintain, no convention to teach new recruits. Your documentation is now the linting rules code. 
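As one possible shape for this (assuming a golangci-lint setup with its forbidigo linter; any centralized linter works the same way), the old convention becomes a single rule in the shared configuration:

```yaml
# .golangci.yml – shared global config (golangci-lint with forbidigo assumed)
linters:
  enable:
    - forbidigo
linters-settings:
  forbidigo:
    forbid:
      # Ban the old convention; the message tells devs what to use instead.
      - p: '^errors\.Wrapf$'
        msg: "use fmt.Errorf with %w instead of errors.Wrapf"
```

Once every project’s build pulls this shared config, a PR that reintroduces errors.Wrapf simply fails the lint step.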

There you have it: Scalable Impact.

At Torq we use ReviewDog to achieve this, which I’ll describe in detail in a later post.

2. Unified/Reusable CI pipeline

Another story you may be able to relate to: one day your CTO reaches out and asks for the full list of dependency licenses used across your entire codebase.

So you start googling and find an awesome tool that does exactly that. But now you’re stuck. You have to run that tool and collect the results for all your projects, and we’re talking about a minimum of 50 or so (far more in larger organizations).

Here’s the (happy!) twist: With unified CI Pipelines, this task becomes easy.

By unified, I mean maintaining a common baseline for all your CI pipelines: one that you can change from a single location with a single change.

To solve the above issue, you add the license-extraction logic to your common base and execute it as part of your CI runs. Then just let your CI do the rest.

Another (real) example: Let’s say you want to change your unit-testing runner.

You decided that gotestsum is the real deal, and your teammates chime in: “That old go test is useless. We MUST move to the new shiny tool.”

More opportunity for scalable impact: Just change the test part of your unified CI, and all your projects will use gotestsum instead of go test. 

To achieve this, we use Makefile inheritance and CircleCI orbs. Again, I’ll dig into this in a future post.
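As a sketch of the Makefile-inheritance half (the target names and the go-licenses invocation are illustrative; CircleCI orbs play the same role on the pipeline side):

```makefile
# base.mk – the shared baseline, pulled into every repository
# (distribution mechanism – vendoring, CI checkout – is up to you)

test:
	gotestsum -- ./...          # swap the test runner here; every project follows

licenses:
	go-licenses report ./...    # hypothetical license-extraction step

# ----- each project's own Makefile -----
# include base.mk
# (projects can still add or override targets locally)
```

Changing the `test` recipe in base.mk is the whole gotestsum migration: one commit, all projects.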

3. Automated E2E Tests

Nothing new here: each and every deployment should pass the same test suite. Again, every deployment.

Green -> deployed

Red -> rolled back

No “sanity suite,” no “Ah, this one is flaky, you can go ahead and deploy.” No user intervention. Of course, this means your complete E2E suite should be fast and reliable (I use under five minutes as a rule of thumb).
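The gate itself can be sketched in a few lines of shell (here `run_e2e`, `deploy`, and `rollback` are placeholders for your own commands):

```shell
#!/bin/sh
# Deploy gate: every deployment runs the full E2E suite.
# Green -> deploy; red -> roll back. No flags, no exceptions.
gate() {
  if run_e2e; then            # your complete E2E suite, ideally < 5 minutes
    deploy
    echo "deployed"
  else
    rollback
    echo "rolled back"
  fi
}
```

The point is that this is the only path to production; there is no side door for “just this once.”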

Adopt the “Beyonce Rule”

“If you liked it then you shoulda put a ring on it!” said the mighty Beyonce in her famous song, Single Ladies (Put a Ring on It).

Later, mischievous devs took that line and rephrased it as “if you like it, put a [test, lint, e2e, ticket] on it!”

Put plainly, new investments or changes require you to put the right framework in place to make them scalable. Getting started with this requires having the right tools; but after that, it’s easy to adopt the practice.

Found a new class of issue that can be caught by a linting rule? Easy! Add the rule to your centralized linting solution.

Want to add a new tool that tracks the test coverage? Simple! Change the unified CI configs.

The upfront investment here helps the entire team grow and delivers returns (in reduced toil and increased velocity) over time. Applying the Beyonce Rule turns your team from Danity Kane into Destiny’s Child. It becomes super easy to add or change existing conventions and processes. The Beyonce Rule is cost-effective and easily adopted.

Threat Hunting Like a Pro — With Automation

It’s no secret that cyber attacks are on the rise. Not only are they becoming more frequent, but the malicious actors who mount these attacks are constantly improving their skills and evolving the tools in their arsenals. Protecting your organization is challenging at best, especially since we measure the return on investment for cybersecurity as ‘preventing losses’ rather than ‘increasing revenue.’

Threat hunting is a proactive approach to securing your systems. Unfortunately, manual threat hunting can be time-consuming and labor-intensive. Combine that with a shortage of trained and talented threat hunters in our industry, and it is apparent that we need a different and more effective approach to the problem. This article will investigate the challenges involved with threat hunting and explore how you can automate the process of threat hunting in your organization to proactively improve your applications and systems’ security without requiring an excessive investment.

What Is Threat Hunting?

Cyber attacks come in many different forms. Aggressive tactics, such as those used in a distributed denial of service (DDoS) attack, are easy to identify. However, it is the more subtle attacks — the ones that quietly infiltrate your systems, compromise security from the inside, and steal data — that are the most dangerous and the hardest to detect. Threat hunting is how organizations identify and mitigate these threats.

Successful cyber attacks require patience, combined with a variety of tools and intelligence. The attacker might start by compromising an authorized user’s account using a phishing scheme or social engineering. Once they assume a valid identity, they attempt to elevate their privileges, leverage known vulnerabilities, or install malware to find and extract data within the corporate environment. Ideally, the attacker tries to accomplish all of this without triggering traditional security monitoring systems.

The threat hunter uses monitoring, identification of suspicious patterns, and other proactive tools to identify and mitigate such attacks before they compromise the system’s integrity. Like the would-be hacker, the threat hunter requires patience, cunning, and access to a comprehensive set of tools. Automation and machine learning further enhance threat hunting by gathering data, identifying suspicious patterns in real time, reducing human error, and freeing up resources to improve existing processes.

Why Threat Hunting Can Be Challenging

Public cloud providers, such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure, help companies expand their capabilities and scale in unprecedented ways. Unfortunately, this growth also increases the attack surface of an organization’s systems. The attack surface isn’t limited to the infrastructure hosting applications and data: malicious actors use email, identity management, and all other corporate systems as part of an attack on the organization.

It is incredibly challenging to support an effective threat hunting initiative, given the extensive nature of an organization’s system, the evolving nature of attacks, and the expense of hiring well-trained experts from a limited talent pool.

Leverage the Experience of Experts

You don’t have to stand alone against attacks on your organization. Fortunately, cybersecurity is a common problem, and as such, there are experienced and talented experts who dedicate their time to supporting organizations like yours. You can supplement your security initiative by utilizing these tools directly or by partnering with an organization like Torq that provides tooling and automation for a more comprehensive solution. 

When looking for a security solution, ideally you want to find one that offers Extended Detection and Response (XDR) integrations to monitor, detect and respond to potential attacks on:

  • Network Endpoints
  • Cloud and Data Center Workloads
  • Corporate Firewalls
  • Identity Management Systems
  • Email 

Information and anomalies from each system can be correlated and analyzed to identify potentially malicious activity and instances of compromise.

Gaining the Advantage with Automation

XDR security solutions provide your threat hunting team with the tools they need to actively monitor and detect threats to your systems. When you integrate them with automation tools, such as those available from Torq, you create a scalable, efficient system that can work around-the-clock to keep your systems secure.

Let’s look at some potential use cases that you can address with an automated threat detection solution. The most critical use of such a system is to identify events or activities that might indicate a potential threat. The system collects this information by querying events and agents within the network and enriching them with related information. External services such as Joe Security and VirusTotal, among others, provide a more complete picture of the threats involved.

This enriched alert information is automatically correlated and analyzed against all events to identify possible attacks and produce comprehensive alerts. For known and familiar attacks, the system can remediate automatically and suppress warnings before the support team is notified.

Once the system identifies an attack, it is critical to respond as quickly as possible. Using an automated process to isolate and quarantine suspicious human and machine entities, processes, or emails within your system reduces the blast radius of the attack and limits additional exposure.

Supporting Constant Change

Our systems have evolved dramatically from the old monoliths with periodic changes based on a release schedule. In the modern era of DevOps, our systems morph and change constantly. Automating security scans on new and existing infrastructure is critical to ensure the integrity of your environment. As you add new devices and remove retired ones, you can automate updates to allow-lists while at the same time updating deny-lists based on indicators of compromise (IOC). 

As you identify vulnerabilities and create or modify security rules for different user groups or security groups, an effective automation suite will facilitate the system’s propagation of the necessary changes. Automating these processes ensures that your systems remain up-to-date with the latest security patches and changes.

Learning More

Even though the systems we develop and support are unique to each client’s needs, we share a common need to secure them and to protect the data our clients entrust to us. We don’t need to face these attacks alone; partnering with experts in security and automation can help us better protect and secure our systems.

If you’d like to learn more about how Torq can help you more effectively hunt threats, reach out to us for no-code automation to support your security teams, and keep you one step ahead.

 

5 Security Automation Examples for Non-Developers

If you’re a developer who lives and breathes code all day, you probably don’t mind having to write complex configuration files to set up an automation tool or configure a management policy.

But the fact is that many of the stakeholders who stand to benefit from security automation are not developers. They’re IT engineers, test engineers, help desk staff, or other types of employees who may have some coding skills, but not enough to generate the hundreds of lines of code necessary to set up the typical automation tool.

Fortunately for non-developers, there are ways to leverage security automation without drowning in manually written code. Here’s a look at five examples of how non-developers can take advantage of security automation while writing little, if any, code.

1. Low-Code Configuration for Detection Rules

Detection rules are the policies that tell security automation tools what to identify as a potential breach. They could configure a tool to detect multiple failed login requests from an unknown host, for instance, or repeated attempts to access non-existent URLs (which could reflect efforts by attackers to discover unprotected URLs that contain sensitive content).

Traditionally, writing rules to detect events like these required writing a lot of custom code, which only developers were good at doing. A faster and easier approach today is to use a low-code technique that allows anyone – not just developers – to specify which types of events security tools should monitor for and then generate the necessary configuration code automatically.

When you take this approach, any engineer can say, “I want to detect repeated login failures” or “I want to detect high rates of 404 responses from a single host,” and then generate the requisite code automatically.
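As a sketch of what such a low-code rule might look like for both of those examples (the syntax here is invented for illustration; each platform has its own schema):

```yaml
# Hypothetical low-code detection rules – syntax invented for illustration
rules:
  - name: repeated-login-failures
    match:
      event: login_failed
      group_by: source_host
      threshold: 5
      window: 60s
  - name: url-probing
    match:
      event: http_response
      status: 404
      group_by: source_host
      threshold: 50
      window: 300s
```

The engineer states the intent in a few declarative lines; the platform generates the detection code underneath.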

2. Automated Incident Response Playbooks

Along similar lines, you don’t need to be a developer to specify which steps automation tools should take when they detect a security incident.

Instead, non-developers can indicate their intent, which may be something like “I want to block a host’s IP range if the host is previously unknown to the network and more than three failed login attempts originate from the host in under a minute.” Then, automation tools will generate the code necessary to configure security tools to enforce that rule instantly whenever the specified condition is triggered.
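That stated intent might be captured in a playbook definition roughly like this (again, a hypothetical sketch; field names and templating syntax are invented for illustration):

```yaml
# Hypothetical response playbook – syntax invented for illustration
playbook: block-unknown-brute-forcer
trigger:
  event: login_failed
  where:
    host_known: false
    failures: ">= 3"
    window: 60s
actions:
  - block_ip_range: "{{ trigger.source_ip }}/24"
  - open_ticket: security-oncall
```

The non-developer edits the trigger conditions and action list; the automation tool translates them into the enforcement code.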

3. Automatically Trigger Endpoint Scanning

Whenever a possible security incident arises, automatic scanning of impacted endpoints is a basic best practice for determining the extent of any breach and isolating compromised hosts from the network.

However, performing endpoint scanning across a large number of hosts can be difficult. It has traditionally required either a large amount of manual effort (if you perform each scan by hand) or authoring code to tell your scanning tools to run the scans automatically based on the host and access data you give them. Either way, the process is slow and requires collaboration between multiple stakeholders.

By using an approach where endpoint scans are configured and executed automatically, teams can perform this important step much faster. For instance, if help desk staff supporting end users notice the possible presence of malware on a user’s device, they can automatically request scans of all endpoints associated with that user (or with the user’s group or business unit) rather than having to ask developers to configure the scans for them.

4. Automatically Generate Security Testing Code During CI/CD

The testing stage of the CI/CD pipeline has traditionally focused on testing for application performance and reliability rather than security.

That’s not because test engineers deem security unimportant. It’s because most of them aren’t security engineers, and they don’t want to spend lots of time writing code to automate pre-deployment security testing on top of performance testing.

This is another context in which automatically generated code can help integrate security into a process in which it has traditionally played little role due to the complexity of generating the necessary security code. When test engineers can indicate the types of security risks they want to test for within application release candidates (like injection vulnerabilities) and then automatically generate the code they need to run those tests, it becomes much easier to make security testing part and parcel of the broader CI/CD testing process.

5. Update Security Automation Rules for a New Environment

Your business may already have security configuration code in place. That’s great – until you decide to make a change like moving to a new cloud or migrating from VMs to containers, at which point your rules need to be rewritten.

You could update the rules by having security analysts and developers work together tediously to figure out what needs to change and how to change it. Or, you could use low-code security automation tools to generate the new code automatically. There may be some tweaks left for your team to perform manually, but the bulk of the heavy lifting required to secure your new setup can be performed automatically.

Extending Security Automation to Non-Developers

Security automation is a powerful methodology. But given that non-coders are often the stakeholders most in need of security automation, platforms that require stanza upon stanza of manual configuration code to do their job make it difficult – to say the least – for most businesses to leverage security automation to the fullest effect.

That’s why the future of security automation lies in solutions that generate the necessary code and configurations automatically, allowing all stakeholders to implement the security checks and responses they need in order to protect their assets without having to learn to code or lean on developers to write code for them.

5 Automated Anti-Phishing Protection Techniques

In an age when attackers create over a million phishing sites each month, and phishing serves as a beachhead for 95 percent of all attacks against enterprise networks, how can businesses respond?

Part of the answer lies in educating users to recognize and report phishing, of course. But user education only goes so far – particularly because the same statistics cited above show that, on average, only 3 percent of users will report phishing emails. Strong anti-phishing education may increase that number, but you’re still fighting an uphill battle if you rely on your users as your primary means of defense against phishing.

Instead, teams should lean as much as possible on automated anti-phishing techniques. By using automation to detect and respond to phishing attempts, businesses can stop the majority of phishing messages before they ever reach end-users.

Keep reading for an overview of five practical strategies for automatically detecting and managing phishing attacks.

Filter Messages Based on Multiple Attributes

Most security and IT teams know that they should automatically filter incoming email for signs of malicious content.

However, the mistake that many teams (and security tools) make in this regard is focusing just on one attribute of messages – typically, the content of the message itself – when performing scans.

Although scanning for signs of phishing like misspelled words or suspicious domain names is one way to detect phishing messages, it’s hardly the only one. A better approach is to evaluate each message based on multiple attributes – its content, the domain from which it originated, whether or not it contains an attachment, which kind of attachment, and so on – to build a more informed assessment of whether it may be phishing.

This multifaceted analysis is especially important for automatically catching phishing attempts, given that attackers have gotten much better at crafting convincing phishing content. The days are long gone when simply scanning email for strings like “Nigerian prince” guaranteed that you’d catch the phishers.
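To make the idea concrete, here is a toy multi-attribute score in Python; every weight, threshold, and keyword below is invented for illustration, and a real filter would combine far more signals:

```python
# Toy multi-attribute phishing score. All weights, thresholds, and
# keywords are illustrative, not a production ruleset.
SUSPICIOUS_TLDS = {".zip", ".xyz", ".top"}
RISKY_ATTACHMENTS = {".exe", ".js", ".html"}

def phishing_score(message: dict) -> int:
    score = 0
    sender = message.get("sender_domain", "")
    if any(sender.endswith(tld) for tld in SUSPICIOUS_TLDS):
        score += 2                      # suspicious sender domain
    if message.get("sender_first_seen", True):
        score += 1                      # domain never seen before
    body = message.get("body", "").lower()
    if "verify your account" in body or "password" in body:
        score += 1                      # classic credential-phish wording
    for name in message.get("attachments", []):
        if any(name.lower().endswith(ext) for ext in RISKY_ATTACHMENTS):
            score += 3                  # executable-style attachment
    return score                        # e.g. quarantine when score >= 4
```

No single attribute condemns a message; it is the combination of weak signals that pushes a real phish over the threshold.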

Detonate Attachments in Sandboxes

If your security tools detect possible malicious content but you need an extra level of confirmation, you can take the automated response a step further by “detonating” attachments (or downloading and opening any links the phishing content includes) inside a sandboxed environment.

By opening the suspicious content in a safe, isolated location and observing what happens, you can detect anomalies or attack signatures that confirm whether the content is indeed malicious.

Of course, the original content should remain quarantined and inaccessible to your end users while your tools perform the sandboxed detonation. Depending on the results of the sandbox analysis, you can then either safely release the content to users or block it definitively.

Block Sender Names and Domains Automatically

If you detect a phishing attempt, you can minimize its impact by using automation tools to block the sender’s name and domain as quickly as possible. Doing so minimizes the number of emails or other messages that the phishers are able to send to your users. It also disrupts their ability to engage with any users whom they successfully trick into responding to them.

And, by blocking not just malicious sender names but entire domains, you make it much harder for the phishers to continue their attack using multiple accounts.

Automatically Scan Affected Endpoints

Another step that you should take immediately and automatically upon detecting a phishing email is to scan any endpoints – such as the affected user’s PC or phone – that are associated with it.

Immediate scanning will maximize your chances of detecting and isolating any malware that the phishers may have been able to deploy.

Reset Affected User Credentials

Along with scanning impacted endpoints, you should use automation tools to reset the login credentials of users who may have been affected by a phishing attack. By logging them out of any open sessions and forcing a password change, you mitigate the ability of attackers to exploit accounts that they compromised through phishing.

Automation as the Future of Anti-Phishing

The phishers are only going to get better at what they do. To keep up, businesses need to become more efficient in their responses. That means adopting automated anti-phishing tools that allow teams not just to detect phishing attacks as quickly and as accurately as possible, but also to minimize the potential impact of a successful phishing breach on the IT estate.

Automated Threat Intelligence: An Overview

SecOps and security teams spend an excessive amount of time sifting through low-value, poorly contextualized alarm data rather than actively hunting for valid threats. Meanwhile, bad actors are constantly looking to steal whatever they can with the least possible exposure. Recent ransomware attacks in critical business sectors are a reminder that organizations cannot afford to lie dormant.

This blog post will unpack strategies to help overcome these challenges and explain why integrating threat intelligence with security orchestration and automation is critical for an effective security operations strategy.

What Is Threat Intelligence and Why Is It Needed?

Threat intelligence is the evidence-based collection of information about, and observation of, the capabilities, techniques, motives, goals, and targets of an existing threat. Simply put, it’s everything that you know about your attacker – actual or potential – based upon their motives and how badly they can damage your business assets.

Threat intelligence is not a checklist. It’s a cycle of well-defined processes and operations that involves collecting and managing potentially valuable pieces of information called observables, cleaning and normalizing these observables, comparing them to current data to remove duplicates, and then storing them in a structured, human-readable format.

However, transforming raw collections of data into valuable and actionable intelligence requires a lot of effort. The data must pass through many layers of processing and evaluation before reaching the end product. According to established practice, you should run a six-part cycle of data collection consisting of direction, collection, processing, analysis, dissemination, and finally, feedback. Due to the nature of these operations, you need to keep one eye out for new threats and the other on your adversaries’ capabilities at all times. It’s also just as important to maximize your use of resources.

You need to be able to identify the most critical threats and act on them before attackers make their move – and doing so quickly and accurately is what keeps your organization safe. Therefore, the first and most important part of operating a threat intelligence network is figuring out how to automate the entire security orchestration process.

6 Ways to Automate Your Threat Intelligence

As we’ve mentioned, the most effective way to gather actionable and valuable threat intelligence is through security orchestration and automation. The general operations that you need to automate may include the following:

1. Pulling relevant observables from alerts or emails into the right IoC

Observables are often stored as strings that represent hashes or registry keys. They can even be stored as event types (such as the creation or deletion of certain files). These events usually come from automated systems that monitor pertinent files and system components that are critical to the operation of computers and networks. You will need to be able to pull observables from emails, Slack messages, or alerts into relevant Indicators of Compromise (IoC) containers.
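A minimal sketch of this extraction step in Python (the patterns are simplified; production extractors cover many more IoC types such as URLs, domains, registry keys, and file paths):

```python
import re

# Simplified observable patterns; real extractors handle far more IoC types.
PATTERNS = {
    "md5":    re.compile(r"\b[a-fA-F0-9]{32}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "ipv4":   re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def extract_observables(text: str) -> dict:
    """Pull hashes and IPs from an alert or email body into an IoC container."""
    found = {kind: sorted(set(p.findall(text))) for kind, p in PATTERNS.items()}
    # Keep only the observable types actually present in this text.
    return {kind: values for kind, values in found.items() if values}
```

The same routine can be pointed at Slack messages or alert payloads; the output dictionary is the IoC container that downstream steps enrich and act on.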

2. Creating tickets/issues on tracker software

Once the IoC containers have been populated with observables, you will need to set up automatic alerts based on specific rules and conditions, such as when events match criteria for generating suspicious files or deleting sensitive log files from the system. Creating tickets and triggering incident response systems will help bring people up to date on any suspicious activity.

3. Delivering results through email and instant messaging

Effective communication means providing relevant parties with actionable information when an IoC needs attention. This can be accomplished through email, instant messaging, or applications.

4. Collecting more information about IP, domain, email, file, and signatures from various sources

When collecting observables, you will need to draw on several vetted and established sources. These could include public or private organizations like the SANS Internet Storm Center or DomainTools. All of the feeds need to be cleaned, parsed, and stored in the same structure for further analysis.

5. Performing contextual log searches for IP, domain, email, file, and signatures

Searching for matching IoCs based on a specific IP, domain, email, file, or signature should be quick, accurate, and thorough. Another way to improve this process is to enable saving search queries so that they can be attached to automated alerts.
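A toy version of such a contextual search over structured log records, with saved queries as stored criteria (the field names are illustrative):

```python
def search_logs(records, **criteria):
    """Return log records matching every given IoC field (ip, domain, ...)."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

# A "saved query" is just stored criteria that an automated alert can re-run.
SAVED_QUERIES = {"suspicious-ip": {"ip": "203.0.113.9"}}
```

An alert rule can then replay a saved query on demand instead of an analyst retyping the search each time.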

6. Offering IoC block settings

IoCs are significant indicators that a particular resource has likely been compromised. Services and operators need to respond to actionable events when there are active threats, and they should be able to create blocklists to block those threats quickly.

What Are the Key Challenges in Implementing Threat Intelligence Automation?

Implementing threat intelligence automation faces several challenges, including the complexity of integrating diverse security tools, the need for skilled personnel to manage and interpret automated processes, and ensuring the accuracy and timeliness of the threat data being analyzed. Organizations must navigate these hurdles by fostering a culture of continuous learning, investing in training for their security teams, and choosing scalable, interoperable solutions that can adapt to evolving threats and technologies.

How Does Threat Intelligence Automation Enhance Incident Response?

Threat intelligence automation enhances incident response by accelerating the detection, analysis, and containment of threats. By automating the collection and correlation of threat data, organizations can quickly identify indicators of compromise (IoCs) and initiate predefined response protocols. This rapid response capability minimizes the window of opportunity for attackers, reduces the impact of breaches, and enables a more proactive defense posture. Furthermore, automation ensures that incident response teams are focused on high-value tasks, such as threat hunting and strategic analysis, rather than being bogged down by manual data processing.

What Role Does Artificial Intelligence (AI) Play in Threat Intelligence Automation?

Artificial Intelligence (AI) plays a pivotal role in threat intelligence automation by enabling advanced analytics, pattern recognition, and predictive capabilities. AI algorithms can sift through vast amounts of data at unprecedented speeds, identifying anomalies, trends, and potential threats that might elude human analysts. This not only improves the accuracy and efficiency of threat detection but also allows organizations to anticipate and prepare for emerging threats. AI-driven automation can adapt to new tactics employed by attackers, continuously learning from the latest threat intelligence and adjusting defensive measures accordingly.

Getting Started with Automated Threat Intelligence

Automating your threat intelligence initiatives is not without its challenges, chief among them an organization’s willingness to step up its security operations and transform the way it does business in a digital world where it is constantly under threat of attack.

Threat intelligence is a good way for organizations to take an offensive position, plan for the unexpected, and protect their critical assets and reputation. By automating their threat intelligence operations, they can turn the tables and respond to threats consistently, around the clock rather than only during operational hours. If you want to delve deeper into threat intelligence, you can explore these community repo resources, or learn more about how Torq can help.