Modern Security Operations Center Framework

This post was previously published on The New Stack

The Origins of Modern Cloud/IT Environments

With agile development, the software development life cycle has evolved to focus on customer satisfaction, enhancing product features based on user feedback. This helps shorten the time to market, since teams can release a minimum viable product and then continuously improve its features. Agile encourages team cooperation through sprints, daily standups, retrospectives, testing, quality assurance and deployment. Through continuous integration and continuous delivery (CI/CD), along with the integration of security into operations, teams can deliver software faster.

Yet, as more and more businesses adopt cloud computing, cybersecurity threats grow, with bad actors targeting the security vulnerabilities of complex hybrid infrastructures that include public cloud services. Consequently, SecOps plays a crucial role in ensuring that DevOps teams prioritize security. Modern security tools and frameworks aid SecOps teams, providing zero-downtime releases, automated deployments and reduced attack surfaces.

Security Operation Center (SOC) and SecOps Evolution

Traditionally, security was an afterthought in most IT environments. It was structured as a siloed department and only came to the forefront when an incident had been discovered. Key organizations, such as government agencies, had network operations centers (NOCs), which focused on detecting incidents in their network devices. 

While traditional security operations centers (SOCs) were reactive to security threats and attacks, the next generation of SOCs takes a more proactive approach using automation and real-time security information and event management (SIEM). Modern SOCs are more sophisticated. They emphasize collaboration between people, technologies and processes to thoroughly monitor and investigate security events in real time, which enables them to prevent, detect, and respond to cyberattacks. They go above and beyond standard security compliance by establishing cyber defense and incident response centers that collaborate to manage threat intelligence and system security.

Cyber warfare has never been more complex, and the bad news is that it is only becoming more advanced and more pervasive. Security operations and SOCs are under increasing pressure to identify and respond to threats quickly, as well as to harden defenses against a growing range of threats. As a result, frameworks such as MITRE ATT&CK and D3FEND have been developed to address these challenges. They are used to detect, investigate and protect against security breaches and attacks in today’s cloud systems.

To be successful, modern SecOps teams must be given the authority to adopt security solutions that replace opaque, “black box” security with automation, threat hunting, vulnerability management and real-time monitoring.

What Is the MITRE ATT&CK Framework?

MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) is a knowledge base that assists SecOps intelligence decision-makers. It’s a behavioral threat model used to develop, test and improve behavior-based detection capabilities over time. Penetration testers use the MITRE ATT&CK methodology to orchestrate their attacks, locate vulnerabilities in your infrastructure, exploit them and report their findings. It helps enterprises understand malicious behaviors and mitigate the risks and threats they face.

The MITRE ATT&CK framework employs a set of methodologies and tactics to identify indicators of compromise, including defense evasion techniques used to evade detection, lateral movement techniques used to spread throughout your infrastructure and exfiltration used to steal data. Cataloging these adversarial tactics helps enterprises create a comprehensive list of known prospective attack techniques, which SOC teams can use to find potential weaknesses and then focus on developing defensive measures.
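
To make that mapping concrete, here is a minimal Go sketch of how a SOC pipeline might tag incoming alerts with the ATT&CK tactic and technique they most likely correspond to. The technique IDs come from the public ATT&CK matrix, but the alert structure and the keyword-based matching are entirely hypothetical; real detections key off telemetry, not message strings.

    // attack_tagging.go: a hypothetical sketch of tagging alerts with
    // MITRE ATT&CK tactics/techniques. The Alert fields and the
    // keyword-based mapping are illustrative only.
    package main

    import (
        "fmt"
        "strings"
    )

    type Technique struct {
        ID     string // e.g. "T1021"
        Name   string
        Tactic string // e.g. "Lateral Movement"
    }

    // A toy keyword map; real pipelines match on telemetry, not strings.
    var keywordToTechnique = map[string]Technique{
        "psexec":      {"T1021", "Remote Services", "Lateral Movement"},
        "log cleared": {"T1070", "Indicator Removal", "Defense Evasion"},
        "dns tunnel":  {"T1041", "Exfiltration Over C2 Channel", "Exfiltration"},
    }

    type Alert struct {
        Source  string
        Message string
    }

    func tag(a Alert) (Technique, bool) {
        msg := strings.ToLower(a.Message)
        for kw, t := range keywordToTechnique {
            if strings.Contains(msg, kw) {
                return t, true
            }
        }
        return Technique{}, false
    }

    func main() {
        alerts := []Alert{
            {"edr", "PsExec service installed on host FIN-22"},
            {"dns", "High-entropy DNS tunnel suspected from 10.0.4.8"},
        }
        for _, a := range alerts {
            if t, ok := tag(a); ok {
                fmt.Printf("[%s] %s -> %s (%s)\n", a.Source, a.Message, t.ID, t.Tactic)
            }
        }
    }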

What Is the MITRE D3FEND Framework?

MITRE D3FEND is a companion of MITRE ATT&CK. It uses a knowledge graph to provide SOC teams with defensive countermeasures to harden and protect their infrastructures based on the identified attack tactics and techniques. D3FEND complements the threat-based ATT&CK model by providing ways to counter common offensive techniques, thereby reducing a system’s potential attack surface.

How Can Modern SOCs Benefit from MITRE ATT&CK and D3FEND Frameworks?

Security breaches, which can result in serious consequences such as lost customers, lost income and damaged reputations, remain a constant threat. SOC teams can use the ATT&CK framework to measure their effectiveness in detecting, analyzing and responding to cyber intrusions. They can also use ATT&CK to better understand and document adversarial group profiles so that they can simulate possible adversarial attack scenarios and come up with cybersecurity controls. Modern SOC teams can use MITRE D3FEND to implement security solutions with the detailed countermeasures that it provides. Using the ATT&CK and D3FEND frameworks together will help teams not only identify defensive gaps, but also make more strategic security tooling decisions.

One key concept behind the MITRE ATT&CK and D3FEND frameworks is threat hunting. Threat hunting tools search for cyber threats lurking undetected in network and security defense endpoints. Here at Torq, we provide a threat-hunting tool that quickly automates your SOC workflows across extended detection and response (XDR), security information and event management (SIEM) and endpoint detection and response (EDR). Start automating today!

Automated Just-In-Time Permissions Using JumpCloud+Torq

For security teams, properly managing which users can access resources and governing the level of access those users have is about as basic as locking the door at night. 

Understandably then, there are thousands of options available to fine-tune or revoke access, and it’s likely that issues come up daily for most companies—if not hourly. But chasing alerts every time a user needs access to a new resource or manually auditing systems to see what entitlements they already have are poor uses of an analyst’s time. These are the classic signs that a process needs to be automated. 

Torq can help your team automate these controls in a number of different ways using pre-built workflows. In combination with JumpCloud, organizations can easily implement layers of security that make sense to both end-users and auditors. By quickly moving cloud-based identities among different groups, IT admins and security teams can add in the conditions of access that make sense for each resource, regardless of where the users are. 

This blog will focus on just-in-time (JIT) access for temporary permissions using JumpCloud user groups and Slack. JumpCloud user groups can allow access to SSO applications, provision users, authenticate network access, and even create local profiles across Mac, Windows, and Linux devices. In this example, we will show how to easily provision access to SSO applications.

How Torq Automates JIT Permissions

This workflow runs when credentials are requested by a Slack command, and if approved, adds users to a JumpCloud user group. When the time limit for access has expired, the user will be automatically removed from the user group, revoking permissions and closing security gaps. 

How it works:

  1. A user invokes a Slack command, triggering a temporary access request. 
  2. JumpCloud then pulls the groups that the user already belongs to, and Torq compares them to applications that have been configured to provide JIT access.
  3. Slack asks the user which group they would like access to and for how long.
  4. Torq then sends the access details to a designated Slack channel and requests approval on behalf of the user.
  5. If access is rejected, or the request times out, the user is notified through Slack.
  6. If access is approved, the user is added to the group in JumpCloud and receives a notification. 
  7. When the predetermined timer expires, Torq sends a command to JumpCloud to remove the user from the group, and the user is notified through Slack.

[Image: Workflow builder in Torq showing the steps for just-in-time access]
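
Under the hood, the grant and revoke steps (6 and 7 above) amount to calls against JumpCloud’s user-group membership endpoint. The Go sketch below shows roughly what that looks like; the endpoint path and payload reflect JumpCloud’s documented v2 API, but verify the details against your own tenant, and note that the group and user IDs are placeholders.

    // jit_group.go: a rough sketch of adding/removing a JumpCloud user
    // to/from a user group. Verify the endpoint and payload against the
    // JumpCloud API docs for your account; IDs below are placeholders.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
    )

    func changeMembership(apiKey, groupID, userID, op string) error {
        body, _ := json.Marshal(map[string]string{
            "op":   op, // "add" or "remove"
            "type": "user",
            "id":   userID,
        })
        url := fmt.Sprintf("https://console.jumpcloud.com/api/v2/usergroups/%s/members", groupID)
        req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
        if err != nil {
            return err
        }
        req.Header.Set("x-api-key", apiKey)
        req.Header.Set("Content-Type", "application/json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 300 {
            return fmt.Errorf("jumpcloud returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        apiKey := os.Getenv("JUMPCLOUD_API_KEY")
        // Grant temporary access (step 6); a timer elsewhere would later
        // call the same function with op "remove" (step 7).
        if err := changeMembership(apiKey, "group-id-placeholder", "user-id-placeholder", "add"); err != nil {
            fmt.Fprintln(os.Stderr, "grant failed:", err)
        }
    }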

As with all workflow templates, users can modify this to align with organizational policies. For example, if a log event is required, steps can be added to log the access into ServiceNow or Jira. 

Using this workflow helps consolidate work into a single medium—Slack channels—and automates user-driven tasks like requesting access. But it still maintains the crucial “human in the loop” for determining if the access is appropriate and/or necessary. Users get access when they need it, and analysts avoid the toil of small tasks. Another win for automation.

Get the JIT Workflow Template

Torq users can find this JIT workflow in the app along with many others for managing identities and access, like Suspend Accounts with No Logins after N Days and Ask User to Confirm Failed Login Attempts.

Get Started Today

Not using Torq yet? Get in touch for a trial account and see how Torq’s no-code automation accelerates security operations to deliver unparalleled protection.

Open Source Cybersecurity: Towards a Democratized Framework

This post was previously published on The New Stack

Today, anyone can contribute to some of the world’s most important software platforms and frameworks, such as Kubernetes, the Linux kernel or Python. They can do this because these platforms are open source, meaning they are collaboratively developed by global communities.

What if we applied the same principles of democratization and free access to cybersecurity? In other words, what if anyone could contribute to security initiatives and help build a cybersecurity culture without requiring privileged access or specialized expertise?

To explore those questions, it’s worth considering the way that open source has democratized software development and comparing it to the potential we stand to realize by democratizing security. 

Although the connection between open source software and democratized security only goes so far, thinking from this angle suggests a new approach to solving the cybersecurity challenges that have become so intractable for modern businesses.

So, let’s take a look at how open source democratized development and what it might look like to do the same thing for security.

Open Source and the Democratization of Software Development

Until the origin of GNU and the Free Software Foundation in the mid-1980s, no one thought of software as something that anyone could help design or create. Instead, if you wanted to build software, you needed membership in an elite club. You generally had to be a credentialed, professional programmer, and you typically had to have a job that put you in a position to create software.

GNU began to change this by allowing volunteers to build software, like the GNU C compiler, that eventually became massively important across the IT industry. Then, in the early 1990s, the advent of the Linux kernel project changed things even more. The Linux kernel was created by an undergraduate, Linus Torvalds, whom no professional programmer had ever heard of. If a twenty-something non-professional Finnish student could create a major software platform, it was clear that anyone could.

Today, anyone still can. You don’t need a special degree or professional connections to contribute to a platform like Kubernetes, as thousands of independent programmers do. Nor do today’s users need to rely on a handful of elite programmers to write the software they want to use. They can go out and help write it themselves via participation in open source projects. (Of course, the big caveat is that you need special skills to write open source software. But we’ll get to that point later.)

As a result of this democratization of development, it has become much easier for users to obtain the software they’d like to use, as opposed to the software that closed-source vendors think they should use. Open source has also turned out to make software development faster, and it tends to lower the overall cost of software development.

Towards an Open Source Cybersecurity Framework

Now, imagine what would happen if the world of cybersecurity were democratized in the way that software development has been democratized by open source.

It would create a world where security would cease to be the domain of elite security experts alone. Instead, anyone would be able to help identify the security problems that they face, then build the solutions they need to address them, just as software users today can envision the software platforms and features they’d like to see, then help implement them through open source projects.

In other words, users wouldn’t need to wait on middlemen — that is, the experts who hold the reins — to build the security solutions they needed. They could build them themselves.

That doesn’t mean that security experts would go away. They’d still be critical, just as professional programmers working for major corporations continue to play an important role alongside independent programmers in open source software development. 

But instead of requiring small groups of security specialists to identify all the cybersecurity risks and solve them on their own, these tasks would be democratized and shared across organizations as a whole.

The result would be a world where security solutions could be implemented faster. The solutions would also more closely align with the needs of users, because the users would be building them themselves. 

And security operations would likely become less costly, too, because the work would be spread out across organizations instead of being handled only by high-earning security professionals.

The Differences Between Open Source and Security Democratization

It’s important, of course, not to overstretch the analogy between open source democratization and security democratization. In particular, there are two main differences between these concepts.

One is that, whereas open source software can be built and shared by communities at large, security work is mostly internal. The security workflows that users might define using security democratization tools would only apply inside their own companies, rather than being shared with the broader community.

The other, bigger limitation is that it takes special skills to build software. While it’s possible for non-coders to contribute to open source projects by doing things like writing documentation or designing artwork, most contributions require the ability to code.

In contrast, security democratization doesn’t have to require specialized security skills on the part of users. By taking advantage of no-code tools that let anyone define, search for and remediate security risks, businesses can democratize their security cultures without having to teach every employee to be a cybersecurity expert.

What this means is that, although it may seem far-fetched at first glance to democratize security, the means for doing so — no-code automation frameworks — are already there. They just need to be placed in the hands of users.

Conclusion

Over the course of a generation, the open source software movement radically transformed the way software is built. Although plenty of closed-source code continues to exist, programmers today can contribute to critical software platforms that operate on an open source model without requiring special connections or credentials. The result has been software that better supports users, and that takes less time and money to build.

Businesses can reap similar benefits in the realm of security by using no-code security automation tools to democratize their approach to security. When anyone can define security requirements and implement tools to meet them, security becomes more efficient, more scalable and better tailored to the needs of each user or group.

Automating Cloud Security Posture Management Response

When we discuss cybersecurity and the threat of cyber attacks, many may conjure up the image of skillful hackers launching their attacks by way of undiscovered vulnerabilities or using cutting-edge technology. While this may be the case for some attacks, more often than not, vulnerabilities are revealed as a result of careless configuration and inattention to detail. Doors are left open and provide opportunities for attacks. The actual exposure in our systems is due to phishing schemes, incorrectly configured firewalls, overly permissive account permissions, and problems our engineers don’t have time to fix.

This article will introduce you to an actionable strategy to protect your environment using Cloud Security Posture Management (CSPM). We’ll describe what CSPM is and why it’s essential for your organization to implement it. We’ll also cover some of the reasons why organizations fail to implement such a strategy effectively. Finally, we’ll explore practical and straightforward approaches that your organization can pursue right away to protect your digital assets and your customers’ data.

What Is Cloud Security Posture Management (CSPM) and Why Is It Important?

When compared with a traditional data center, the cloud offers significant advantages. Unfortunately, our approach to securing a conventional data center doesn’t translate well to the cloud, so we need to recalibrate how we think about and enforce security. CSPM, or Cloud Security Posture Management, is the process of automating the identification and remediation of security risks across our ecosystem.

Cloud providers such as Amazon Web Services (AWS) and Google Cloud provide an expansive range of services and capabilities. While the cloud host takes care of patch management and ensuring availability, it’s the user’s responsibility to protect their data and services from malicious actors. In recent years, several high-profile data breaches have come about due to improperly configured storage buckets or through accounts with more access than required. 

Why Is CSPM Challenging and What Does It Cover?

The public cloud offers more than simply virtual servers and databases. Modern applications are composed of a multitude of services, each with unique permissions and access-control policies. The age of DevOps requires development teams to have access to a wide range of permissions and the organization’s trust that they’ll use that responsibility carefully. Unfortunately, mistakes happen, and in a model where distributed user accounts, systems and configurations constantly evolve, managing and monitoring them all requires a monumental effort.

If we had to distill the challenges of protecting your digital ecosystem into a single concept, it would be visibility. A comprehensive CSPM strategy provides visibility into all aspects of your environment. This visibility includes:

  • Account and Permission Management
  • Service Configuration
  • Patch and Security Update Management
  • Effective and Efficient Problem Resolution
  • Vulnerability Scans of Applications and Third-party Libraries

A CSPM solution provides visibility into each of these aspects and tracks anomalies and suspicious changes as they happen. It automatically remediates potential problems and threats where possible, and raises appropriate alerts when automatic remediation isn’t available.
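
As a mental model only (not any particular product’s API), the core loop of such a solution looks something like the Go sketch below: run a set of checks, auto-remediate findings that have a safe, known fix, and raise an alert for everything else. Every type and function here is hypothetical.

    // cspm_loop.go: a hypothetical detect/remediate/alert loop. The Check
    // interface and findings are illustrative; a real CSPM tool would pull
    // them from cloud provider APIs and policy packs.
    package main

    import "fmt"

    type Finding struct {
        Resource string
        Issue    string
        Severity string
        AutoFix  func() error // nil when no safe automatic remediation exists
    }

    type Check interface {
        Name() string
        Run() []Finding
    }

    // publicBucketCheck stands in for "storage bucket left world-readable".
    type publicBucketCheck struct{}

    func (publicBucketCheck) Name() string { return "public-storage-bucket" }
    func (publicBucketCheck) Run() []Finding {
        return []Finding{{
            Resource: "bucket/customer-exports",
            Issue:    "public read access enabled",
            Severity: "high",
            AutoFix:  func() error { fmt.Println("  -> blocking public access"); return nil },
        }}
    }

    func main() {
        checks := []Check{publicBucketCheck{}}
        for _, c := range checks {
            for _, f := range c.Run() {
                fmt.Printf("[%s] %s: %s (%s)\n", c.Name(), f.Resource, f.Issue, f.Severity)
                if f.AutoFix != nil {
                    if err := f.AutoFix(); err != nil {
                        fmt.Println("  remediation failed, escalating:", err)
                    }
                    continue
                }
                fmt.Println("  no safe auto-fix; raising an alert for the on-call engineer")
            }
        }
    }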

Implementing a CSPM Strategy

Implementing a successful CSPM strategy may seem a little daunting given the scope of what the solution needs to cover and the importance of achieving comprehensive coverage of your entire ecosystem. Most of the large cloud providers have services that can monitor changes within their environments. While they effectively monitor most services within their domain, they are limited to those same services. Ideally, you want to partner with an organization that has invested time and resources into CSPM solutions that can span the breadth of your organization.

Equally important to the coverage of the solution is the capacity for automation. Automated processes can be used for monitoring, analysis and, where possible, remediation. Given the dynamic nature of most environments, manual tracking may not be able to keep up with changes as your organization grows. Additionally, as with any configuration or operational task, there is always the chance of human error, resulting in missed alerts or, worse, problems that are identified and then forgotten as additional tasks and issues arise.

A successful CSPM solution uses automation extensively, monitoring and detecting problems and automatically remediating such problems or isolating them until the appropriate personnel can address them.

Practical CSPM Use Cases

Implementing an automated CSPM solution will alert you to potential vulnerabilities in your systems, misconfigured resources, and potentially harmful changes. Still, there is more to a CSPM solution than just detection and reporting.

Once the CSPM solution discovers an issue with your environment, a well-designed system will also assist with managing issues, performing such tasks as:

  • Filtering issues by priority and severity so that you can devote resources to the most critical issues first.
  • Organizing related issues and ensuring that issues aren’t duplicated across multiple systems.
  • Periodically performing additional scans and tests to determine whether vulnerabilities and issues have already been addressed.
  • Managing the assignment of issues to the appropriate owner within the organization and escalating tickets that might not be receiving proper attention.
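
The first two items in that list, prioritization and de-duplication, are easy to picture in code. The Go sketch below is purely illustrative (a made-up Issue type and a naive fingerprint), but it shows the shape of the busywork a CSPM platform automates for you.

    // triage.go: illustrative prioritization and de-duplication of findings.
    package main

    import (
        "fmt"
        "sort"
    )

    type Issue struct {
        Resource string
        Rule     string
        Severity int // higher is worse
    }

    func triage(issues []Issue) []Issue {
        // De-duplicate: the same rule firing on the same resource across
        // multiple scanners should appear once.
        seen := map[string]bool{}
        unique := issues[:0]
        for _, is := range issues {
            key := is.Resource + "|" + is.Rule
            if !seen[key] {
                seen[key] = true
                unique = append(unique, is)
            }
        }
        // Prioritize: most severe first, so analysts start at the top.
        sort.Slice(unique, func(i, j int) bool { return unique[i].Severity > unique[j].Severity })
        return unique
    }

    func main() {
        issues := []Issue{
            {"vm/build-runner", "unpatched-kernel", 3},
            {"bucket/logs", "public-read", 5},
            {"bucket/logs", "public-read", 5}, // duplicate from a second scanner
        }
        for _, is := range triage(issues) {
            fmt.Printf("sev %d  %-20s %s\n", is.Severity, is.Resource, is.Rule)
        }
    }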

In a nutshell, your CSPM solution should remove much of the guesswork associated with security scans, configuration management, and issue resolution. The system should handle many mundane tasks and only engage your engineers when necessary. This approach will free you and your organization to focus on delivering additional value to your customers and improving your existing offerings.

Learning More

As a leader in the field of automation, Torq is uniquely positioned to help you find and implement a CSPM solution that addresses your organization’s needs. Reach out to Torq to learn more about the services it offers and how it can work with you to improve the security of your systems and manage your cloud environments.

Adopt the “Beyonce Rule” for Scalable Impact

Recently, I started reading the invaluable book Software Engineering at Google, which describes Google’s engineering practices across many different domains.

One of the first chapters discusses making a “scalable impact,” which I find very interesting and which I believe many organizations have overlooked.

What is Scalable Impact?

Creating “scalable impact” means making a change that improves your organization’s engineering practices without requiring additional effort for each new team member.

First, let’s review some examples that don’t demonstrate “scalable impact”.

What Hinders Scalable Impact?

1. Code Reviews

Code reviews have many advantages: they allow teams to share knowledge, catch design flaws, and enforce common patterns and practices. Seems like a good idea, right? Well, the problem is that they don’t scale. The larger the team, the larger the effort, and this effort grows linearly with each new developer.

2. Manual QA

Similar to code reviews, manual QA for each release doesn’t scale. As a team grows, release velocity increases. As release velocity increases, more releases require manual QA, creating a bottleneck and a single point of failure.

3. Manual deployment approvals

In many organizations, only a small, dedicated team can perform the actual deployment to production. Just as with manual QA, increased release velocity brought on by team growth turns this into a function that blocks scale.

4. Excessive documentation

Documentation is useful: it allows teams to share knowledge in-house and publicly without someone having to be there to explain things personally. But as you create more and more documentation, there are two downsides: (1) you have to keep it up to date, and (2) devs need to read it. And we devs (or we as human beings…) can be lazy. We don’t like to do things that require a ton of effort; we love to take the easy way if possible. We don’t read the docs, and we definitely don’t update them when we change something. So in many cases, the end result is a waste of time, or an out-of-date document that no one reads. In the end, the conventions you created may not be used anywhere.

How to Make Scalable Impact

Okay, so how exactly do you make scalable impact then? At Torq, we’ve adopted a number of practices that help us realize scalable impact and set our team up for successful growth. I’ll highlight each of these examples below, and talk through the details of them in a future post.

1. Centralized Linting

Let’s say that one day you decide all your developers have to avoid using errors.Wrapf and instead use fmt.Errorf. At that point, most organizations tend to create a new convention page and write the rule down there. Later, a nice Slack message is sent to the #rnd channel. Something like this:

“Hi there, we decided to stop using errors.Wrapf, please use only fmt.Errorf from now on. See the conventions here: …”

How many of you are familiar with this practice? If you’re familiar with it, you probably also realize this won’t work. 

Why, you ask? Because human beings don’t work well with rules unless they’re enforced. That wiki page describing the newest convention? It’s already out of date the moment you write it.

So how do you solve that issue, then? My advice: Come up with a centralized linting solution. By centralized, I mean one that immediately affects all new builds, without any changes to your projects.

Returning to the example above, with centralized linting you change your global linting rules, which means you immediately prevent any new usages of the old convention. No questions asked, and nothing to remember. It’s as simple as that: the PR won’t be mergeable unless the new convention is used. No documentation to maintain, no convention to teach new recruits. Your documentation is now the linting rules themselves.
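
To make the example concrete, here is what the convention change itself looks like in Go. The lint rule (for instance, a forbid-pattern rule in a shared golangci-lint configuration surfaced through ReviewDog; treat the exact tooling as an assumption) simply rejects the first form and allows the second.

    package main

    import (
        "errors"
        "fmt"
    )

    // fetch is a placeholder that always fails, so the example has an
    // error to wrap.
    func fetch(id string) error { return errors.New("not found") }

    // Before (the deprecated convention):
    //   return errors.Wrapf(err, "loading user %s", id)
    //
    // After (what the centralized lint rule allows):
    func load(id string) error {
        if err := fetch(id); err != nil {
            return fmt.Errorf("loading user %s: %w", id, err)
        }
        return nil
    }

    func main() {
        fmt.Println(load("42"))
    }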

There you have it: Scalable Impact.

At Torq we use ReviewDog to achieve this, which I’ll describe in detail in a later post.

2. Unified/Reusable CI pipeline

Another story you may all be able to relate to: One day your CTO reaches out and asks for the full list of dependency licenses used across your entire codebase.

So you start googling and find an awesome tool that does exactly that. But now you’re stuck. You have to run that tool and collect the results for ALL your projects, and we’re talking about a minimum of 50+ (or far more for larger organizations).

Here’s the (happy!) twist: With unified CI Pipelines, this task becomes easy.

By unified, I mean maintaining a common baseline for all your CI pipelines: one that you can change from a single location with a single change.

To solve the issue above, you add the license-extraction logic to your common base and execute it as part of every CI run. Then just let your CI do the rest.

Another (real) example: Let’s say you want to change your unit-testing runner.

You decided that gotestsum is the real deal, and your teammates chime in: “That old go test is useless. We MUST move to the new shiny tool.”

Another opportunity for scalable impact: Just change the test part of your unified CI, and all your projects will use gotestsum instead of go test.

To achieve this, we use Makefile inheritance and CircleCI orbs. Again, I’ll dig into this in a future post.

3. Automated E2E Tests

Nothing new here: each and every deployment should pass the same test suite. Again, every deployment.

Green -> deployed

Red -> rolled back

No “sanity suite”, no “Ah, this one is flaky, you can go ahead and deploy”. No user intervention. Of course, this means your complete E2E suite should be fast and reliable (I use under 5 minutes as a rule of thumb).

Adopt the “Beyonce Rule”

“If you liked it then you shoulda put a ring on it!” said the mighty Beyonce in her famous song, Single Ladies (Put a Ring on It).

Later, mischievous devs took that line and rephrased it to “if you like it, put a [test, lint, e2e, ticket] on it!”

Put plainly, new investments or changes require you to put the right framework in place to make them scalable. Getting started with this requires having the right tools; but after that, it’s easy to adopt the practice.

Found a new issue that can be caught by a linting rule? Easy! Add the lint rule to your centralized linting solution.

Want to add a new tool that tracks the test coverage? Simple! Change the unified CI configs.

The upfront investment here helps the entire team grow and delivers returns (in reduced toil and increased velocity) over time. Applying the Beyonce Rule turns your team from Danity Kane into Destiny’s Child. It becomes super easy to add or change existing conventions and processes. The Beyonce Rule is cost-effective and can be easily adopted.

Threat Hunting Like a Pro — With Automation

It’s no secret that cyber attacks are on the rise. Not only are they becoming more frequent, but the malicious actors who mount these attacks are constantly improving their skills and evolving the tools in their arsenals. Protecting your organization is challenging at best, especially since we measure the return on investment for cybersecurity as ‘preventing losses’ instead of ‘increasing revenue.’

Threat hunting is a proactive approach to securing your systems. Unfortunately, manual threat hunting can be time-consuming and labor-intensive. Combine that with a shortage of trained and talented threat hunters in our industry, and it is apparent that we need a different and more effective approach to the problem. This article will investigate the challenges involved with threat hunting and explore how you can automate the process of threat hunting in your organization to proactively improve your applications and systems’ security without requiring an excessive investment.

What Is Threat Hunting?

Cyber attacks come in many different forms. Aggressive tactics, such as those used in a distributed denial of service (DDoS) attack, are easy to identify. However, it is the more subtle attacks — the ones that quietly infiltrate your systems, compromise security from the inside, and steal data — that are the most dangerous and the hardest to detect. Threat hunting is how organizations identify and mitigate these threats.

Successful cyber attacks require patience, combined with a variety of tools and intelligence. The attacker might start by compromising an authorized user’s account using a phishing scheme or social engineering. Once they assume a valid identity, they attempt to elevate their privileges, leverage known vulnerabilities, or install malware to find and extract data within the corporate environment. Ideally, the attacker tries to accomplish all of this without triggering traditional security monitoring systems.

The threat hunter uses monitoring, identification of suspicious patterns, and other proactive tools to identify and mitigate such attacks before they compromise the system’s integrity. Like the would-be hacker, the threat hunter requires patience, cunning, and access to a comprehensive set of tools. Automation and machine learning further enhance threat hunting by gathering data, identifying suspicious patterns in real time, reducing human error, and freeing up resources to improve existing processes.
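
As a small, concrete illustration of the kind of pattern a hunter (or an automated workflow) looks for, the Go sketch below flags accounts that show a burst of failed logins followed shortly by a success, a classic sign of credential stuffing or a phished password being put to use. The event fields and thresholds are hypothetical.

    // hunt.go: flag accounts with several failed logins followed by a
    // success inside a short window. Illustrative only.
    package main

    import (
        "fmt"
        "time"
    )

    type LoginEvent struct {
        User    string
        Success bool
        At      time.Time
    }

    func suspicious(events []LoginEvent, failures int, window time.Duration) []string {
        var flagged []string
        byUser := map[string][]LoginEvent{}
        for _, e := range events {
            byUser[e.User] = append(byUser[e.User], e)
        }
        for user, evs := range byUser {
            fails := 0
            var firstFail time.Time
            for _, e := range evs { // assumes events are time-ordered
                switch {
                case !e.Success:
                    if fails == 0 {
                        firstFail = e.At
                    }
                    fails++
                case fails >= failures && e.At.Sub(firstFail) <= window:
                    flagged = append(flagged, user)
                    fails = 0
                default:
                    fails = 0
                }
            }
        }
        return flagged
    }

    func main() {
        now := time.Now()
        events := []LoginEvent{
            {"alice", false, now}, {"alice", false, now.Add(10 * time.Second)},
            {"alice", false, now.Add(20 * time.Second)}, {"alice", true, now.Add(30 * time.Second)},
            {"bob", true, now},
        }
        fmt.Println("accounts worth a closer look:", suspicious(events, 3, time.Minute))
    }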

Why Threat Hunting Can Be Challenging

Public cloud providers, such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure, help companies expand their capabilities, and scale in unprecedented ways. Unfortunately, this potential growth also increases the attack surface for an organization’s systems. The attack surface isn’t limited just to the infrastructure hosting applications and data. Malicious actors use email, identity management, and all other corporate systems as part of an attack on the organization.

It is incredibly challenging to support an effective threat hunting initiative, given the extensive nature of an organization’s system, the evolving nature of attacks, and the expense of hiring well-trained experts from a limited talent pool.

Leverage the Experience of Experts

You don’t have to stand alone against attacks on your organization. Fortunately, cybersecurity is a common problem, and as such, there are experienced and talented experts who dedicate their time to supporting organizations like yours. You can supplement your security initiative by utilizing these tools directly or by partnering with an organization like Torq that provides tooling and automation for a more comprehensive solution. 

When looking for a security solution, ideally you want to find one that offers Extended Detection and Response (XDR) integrations to monitor, detect and respond to potential attacks on:

  • Network Endpoints
  • Cloud and Data Center Workloads
  • Corporate Firewalls
  • Identity Management Systems
  • Email 

Information and anomalies from each system can be correlated and analyzed to identify potentially malicious activity and instances of compromise.

Gaining the Advantage with Automation

XDR security solutions provide your threat hunting team with the tools they need to actively monitor and detect threats to your systems. When you integrate them with automation tools, such as those available from Torq, you create a scalable, efficient system that can work around-the-clock to keep your systems secure.

Let’s look at some potential use cases that you can address with an automated threat detection solution. The most critical use of such a system is to identify events or activities that might indicate a potential threat. The system collects this information by querying events and agents within the network, and enriching them with related information. External services such as Joe Security and VirusTotal, among others, are used for a more complete picture of the threats involved.
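
Enrichment itself is often a simple lookup. The Go sketch below asks VirusTotal’s v3 API about a file hash; the endpoint and x-apikey header follow VirusTotal’s public documentation, but the response handling is deliberately minimal and the hash is a placeholder, so treat it as a starting point rather than a finished integration.

    // enrich.go: minimal VirusTotal file-hash lookup used to enrich an alert.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func lookupHash(apiKey, sha256 string) (string, error) {
        url := "https://www.virustotal.com/api/v3/files/" + sha256
        req, err := http.NewRequest(http.MethodGet, url, nil)
        if err != nil {
            return "", err
        }
        req.Header.Set("x-apikey", apiKey)
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        raw, err := io.ReadAll(resp.Body)
        if err != nil {
            return "", err
        }
        // A real workflow would parse the analysis stats from the JSON;
        // here we just hand back the raw document for the next step.
        return string(raw), nil
    }

    func main() {
        report, err := lookupHash(os.Getenv("VT_API_KEY"), "<sha256-of-suspicious-file>")
        if err != nil {
            fmt.Fprintln(os.Stderr, "enrichment failed:", err)
            return
        }
        fmt.Println("got", len(report), "bytes of report JSON")
    }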

The collected alert information is automatically correlated and analyzed against all events to identify possible attacks and raise comprehensive alerts. For known and familiar attacks, the system can automatically remediate the attack and suppress warnings before the support team is notified.

Once the system identifies an attack, it is critical to respond as quickly as possible. Using an automated process to isolate and quarantine suspicious human and machine entities, processes, or emails within your system reduces the blast radius of the attack and limits additional exposure.

Supporting Constant Change

Our systems have evolved dramatically from the old monoliths with periodic changes based on a release schedule. In the modern era of DevOps, our systems morph and change constantly. Automating security scans on new and existing infrastructure is critical to ensure the integrity of your environment. As you add new devices and remove retired ones, you can automate updates to allow-lists while at the same time updating deny-lists based on indicators of compromise (IOC). 

As you identify vulnerabilities and create or modify security rules for different user groups or security groups, an effective automation suite will facilitate the system’s propagation of the necessary changes. Automating these processes ensures that your systems remain up-to-date with the latest security patches and changes.

Learning More

Even though the systems we develop and support are unique and differ depending on our clients’ needs, we share the common need to secure them and protect the data our clients entrust to us. We don’t need to face these attacks alone, and partnering with experts in security and automation can help us better protect and secure our systems.

If you’d like to learn more about how Torq can help you more effectively hunt threats, reach out to us for no-code automation to support your security teams, and keep you one step ahead.

 

5 Security Automation Examples for Non-Developers

If you’re a developer who lives and breathes code all day, you probably don’t mind having to write complex configuration files to set up an automation tool or configure a management policy.

But the fact is that many of the stakeholders who stand to benefit from security automation are not developers. They’re IT engineers, test engineers, help desk staff, or other types of employees who may have some coding skills, but not enough to generate the hundreds of lines of code necessary to set up the typical automation tool.

Fortunately for non-developers, there are ways to leverage security automation without drowning in manually written code. Here’s a look at five examples of how non-developers can take advantage of security automation while writing little, if any, code.

1. Low-Code Configuration for Detection Rules

Detection rules are the policies that tell security automation tools what to identify as a potential breach. They could configure a tool to detect multiple failed login requests from an unknown host, for instance, or repeated attempts to access non-existent URLs (which could reflect efforts by attackers to discover unprotected URLs that contain sensitive content).

Traditionally, writing rules to detect events like these required writing a lot of custom code, which only developers were good at doing. A faster and easier approach today is to use a low-code technique that allows anyone – not just developers – to specify which types of events security tools should monitor for and then generate the necessary configuration code automatically.

When you take this approach, any engineer can say, “I want to detect repeated login failures” or “I want to detect high rates of 404 responses from a single host,” and then generate the requisite code automatically.
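
Under the hood, the generated configuration usually boils down to a small, declarative rule. A hypothetical Go rendering of the two intents above might look like this (field names and thresholds are invented for illustration):

    // rules.go: what the auto-generated detection rules might boil down to.
    package main

    import (
        "fmt"
        "time"
    )

    type DetectionRule struct {
        Name      string
        Event     string        // which event stream to watch
        Threshold int           // how many matches trip the rule
        Window    time.Duration // within this period
        GroupBy   string        // e.g. per source host
    }

    func main() {
        rules := []DetectionRule{
            {"repeated-login-failures", "auth.failed_login", 5, time.Minute, "source_ip"},
            {"url-probing", "http.response_404", 50, 5 * time.Minute, "source_ip"},
        }
        for _, r := range rules {
            fmt.Printf("%s: alert when %d or more %q events come from one %s within %s\n",
                r.Name, r.Threshold, r.Event, r.GroupBy, r.Window)
        }
    }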

2. Automated Incident Response Playbooks

Along similar lines, you don’t need to be a developer to specify which steps automation tools should take when they detect a security incident.

Instead, non-developers can indicate their intent, which may be something like “I want to block a host’s IP range if the host is previously unknown to the network and more than three failed login attempts originate from the host in under a minute.” Then, automation tools will generate the code necessary to configure security tools to enforce that rule instantly whenever the specified condition is triggered.
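
Expressed as code, that intent becomes a small condition plus an action. The Go sketch below is hypothetical end to end (no real firewall API is called), but it shows how thin the glue is once the platform generates it for you:

    // playbook.go: hypothetical "block unknown host after repeated failures" response.
    package main

    import (
        "fmt"
        "time"
    )

    type FailedLogin struct {
        SourceIP string
        At       time.Time
    }

    // knownHosts stands in for an asset inventory lookup.
    var knownHosts = map[string]bool{"10.0.1.15": true}

    func shouldBlock(ip string, attempts []FailedLogin, window time.Duration, limit int) bool {
        if knownHosts[ip] {
            return false
        }
        count := 0
        cutoff := time.Now().Add(-window)
        for _, a := range attempts {
            if a.SourceIP == ip && a.At.After(cutoff) {
                count++
            }
        }
        return count > limit
    }

    func blockRange(ip string) {
        // In a real playbook this would call your firewall or WAF API.
        fmt.Println("blocking the address range around", ip)
    }

    func main() {
        now := time.Now()
        attempts := []FailedLogin{
            {"203.0.113.7", now.Add(-10 * time.Second)},
            {"203.0.113.7", now.Add(-20 * time.Second)},
            {"203.0.113.7", now.Add(-30 * time.Second)},
            {"203.0.113.7", now.Add(-40 * time.Second)},
        }
        if shouldBlock("203.0.113.7", attempts, time.Minute, 3) {
            blockRange("203.0.113.7")
        }
    }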

3. Automatically Trigger Endpoint Scanning

Whenever a possible security incident arises, automatic scanning of impacted endpoints is a basic best practice for determining the extent of any breach and isolating compromised hosts from the network.

However, performing endpoint scanning across a large number of hosts can be difficult. It has traditionally required either a large amount of manual effort (if you perform each scan by hand) or authoring code that tells your scanning tools to run the scans automatically based on the host and access data you give them. Either way, the process is slow and requires collaboration between multiple stakeholders.

By using an approach where endpoint scans are configured and executed automatically, teams can perform this important step much faster. For instance, if helpdesk staff who are supporting end-users notice the possible presence of malware on a user’s device, they can automatically request scans of all endpoints associated with that user (or with the user’s group or business unit) rather than having to ask developers to configure the scans for them.

4. Automatically Generate Security Testing Code During CI/CD

The testing stage of the CI/CD pipeline has traditionally focused on testing for application performance and reliability rather than security.

That’s not because test engineers deem security unimportant. It’s because most of them aren’t security engineers, and they don’t want to spend lots of time writing code to automate pre-deployment security testing on top of performance testing.

This is another context in which automatically generated code can help integrate security into a process in which it has traditionally played little role due to the complexity of generating the necessary security code. When test engineers can indicate the types of security risks they want to test for within application release candidates (like injection vulnerabilities) and then automatically generate the code they need to run those tests, it becomes much easier to make security testing part and parcel of the broader CI/CD testing process.

5. Update Security Automation Rules for a New Environment

Your business may already have security configuration code in place. That’s great – until you decide to make a change like moving to a new cloud or migrating from VMs to containers, at which point your rules need to be rewritten.

You could update the rules by having security analysts and developers work together tediously to figure out what needs to change and how to change it. Or, you could use low-code security automation tools to generate the new code automatically. There may be some tweaks left for your team to perform manually, but the bulk of the heavy lifting required to secure your new setup can be performed automatically.

Extending Security Automation to Non-Developers

Security automation is a powerful methodology. But given that non-coders are often the stakeholders most in need of security automation, platforms that require stanza upon stanza of manual configuration code to do their job make it difficult – to say the least – for most businesses to leverage security automation to the fullest effect.

That’s why the future of security automation lies in solutions that generate the necessary code and configurations automatically, allowing all stakeholders to implement the security checks and responses they need in order to protect their assets without having to learn to code or lean on developers to write code for them.

5 Automated Anti-Phishing Protection Techniques

In an age when attackers create over a million phishing sites each month, and phishing serves as a beachhead for 95 percent of all attacks against enterprise networks, how can businesses respond?

Part of the answer lies in educating users to recognize and report phishing, of course. But user education only goes so far – particularly because the same statistics cited above show that, on average, only 3 percent of users will report phishing emails. Strong anti-phishing education may increase that number, but you’re still fighting an uphill battle if you rely on your users as your primary means of defense against phishing.

Instead, teams should lean as much as possible on automated anti-phishing techniques. By using automation to detect and respond to phishing attempts, businesses can stop the majority of phishing messages before they ever reach end-users.

Keep reading for an overview of five practical strategies for automatically detecting and managing phishing attacks.

Filter Messages Based on Multiple Attributes

Most security and IT teams know that they should automatically filter incoming email for signs of malicious content.

However, the mistake that many teams (and security tools) make in this regard is focusing just on one attribute of messages – typically, the content of the message itself – when performing scans.

Although scanning for signs of phishing like misspelled words or suspicious domain names is one way to detect phishing messages, it’s hardly the only one. A better approach is to evaluate each message based on multiple attributes – its content, the domain from which it originated, whether or not it contains an attachment, which kind of attachment, and so on – to build a more informed assessment of whether it may be phishing.

This multifaceted analysis is especially important for automatically catching phishing attempts, given that attackers have gotten much better at crafting good phishing content. The days are long gone when simply scanning email for strings like “Nigerian prince” guaranteed that you’d catch the phishers.
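
One simple way to picture this is as a weighted score across attributes rather than a single content check. The Go sketch below is deliberately naive, with made-up weights and toy checks, but it captures the idea of combining signals before deciding what to quarantine:

    // phish_score.go: toy multi-attribute scoring for inbound email.
    package main

    import (
        "fmt"
        "strings"
    )

    type Email struct {
        FromDomain    string
        Subject       string
        Body          string
        HasAttachment bool
        AttachmentExt string
    }

    func score(e Email) int {
        s := 0
        if strings.Contains(strings.ToLower(e.Body), "verify your account") {
            s += 2 // suspicious content
        }
        if strings.Count(e.FromDomain, "-") > 2 || strings.HasSuffix(e.FromDomain, ".zip") {
            s += 2 // look-alike or unusual sender domain
        }
        if e.HasAttachment && (e.AttachmentExt == ".html" || e.AttachmentExt == ".iso") {
            s += 3 // attachment types commonly abused for credential theft
        }
        return s
    }

    func main() {
        msg := Email{
            FromDomain:    "secure-payroll-login-update.example",
            Subject:       "Action required",
            Body:          "Please verify your account within 24 hours",
            HasAttachment: true,
            AttachmentExt: ".html",
        }
        if s := score(msg); s >= 4 {
            fmt.Println("quarantine for review, score =", s)
        } else {
            fmt.Println("deliver, score =", s)
        }
    }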

Detonate Attachments in Sandboxes

If your security tools detect possible malicious content but you need an extra level of confirmation, you can take the automated response a step further by “detonating” attachments, or opening any links that the phishing content included, inside a sandboxed environment.

By running the suspicious content in a safe, isolated location and evaluating what happens, you can detect anomalies or attack signatures that confirm whether the content is indeed malicious.

Of course, the original content should remain quarantined and inaccessible to your end-users while your tools perform the sandboxed detonation. Depending on the results of the sandbox analysis, you can then either safely release the content to users or block it definitively.

Block Sender Names and Domains Automatically

If you detect a phishing attempt, you can minimize its impact by using automation tools to block the sender’s name and domain as quickly as possible. Doing so minimizes the number of emails or other messages that the phishers are able to send to your users. It also disrupts their ability to engage with any users whom they successfully trick into responding to them.

And, by blocking not just malicious sender names but entire domains, you make it much harder for the phishers to continue their attack using multiple accounts.

Automatically Scan Affected Endpoints

Another step that you should take immediately and automatically upon detecting a phishing email is to scan any endpoints – such as the affected user’s PC or phone – that are associated with it.

Immediate scanning will maximize your chances of detecting and isolating any malware that the phishers may have been able to deploy.

Reset Affected User Credentials

Along with scanning impacted endpoints, you should also use automation tools to reset the login credentials for users who may have been impacted by a phishing attack. By logging them out of any open sessions and forcing a password change, you also mitigate the ability of attackers to exploit accounts that they compromised through phishing.

Automation as the Future of Anti-Phishing

The phishers are only going to get better at what they do. To keep up, businesses need to become more efficient in their responses. That means adopting automated anti-phishing tools that allow teams not just to detect phishing attacks as quickly and as accurately as possible, but also to minimize the potential impact of a successful phishing breach on the IT estate.