Automated Threat Intelligence Enrichment: An Overview

This post was previously published on The New Stack

Discovering security threats is all well and good. But in many cases, simply knowing that a threat may exist is not enough.

Instead, you also need threat intelligence enrichment. Threat enrichment plays a critical role in helping to evaluate and contextualize threats, root out false positives and gain the insights necessary to mitigate risks as efficiently and quickly as possible.

Keep reading for a primer on how threat enrichment works, why it’s important and where to look to get key insights from threat intelligence data.

What Is Threat Intelligence Enrichment?

Threat intelligence enrichment is the process of adding context to security threat data in order to better understand the threat.

For example, imagine you’ve detected port scans against your servers. You know the IP addresses of the hosts from which the port scans originated, but you don’t know much more than this.

In this case, threat intelligence enrichment could include insights such as where the offending servers are located and which operating systems they are running. This information may, in turn, be useful for determining whether you’re dealing with a probe against your network from a generic botnet or a port scan operation that originates from a more sophisticated group of attackers, like state-sponsored actors. Threat intelligence enrichment could also inform you whether port scans like the type you’ve experienced are associated with any specific known risks, like a pervasive malware attack recently launched against other organizations.
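
To make this concrete, here is a minimal sketch, in Python, of what an enrichment step like this might look like in script form. The reverse-DNS lookup uses the standard library; the lookup_threat_feed helper is a hypothetical placeholder for whatever feed or threat intelligence platform (for example, a MISP instance) you actually query.

import socket

def lookup_threat_feed(ip_address):
    # Hypothetical stand-in for a real feed or threat intelligence query;
    # it would return campaigns or actors already associated with this IP.
    return []

def enrich_ip(ip_address):
    """Collect basic context about a host seen scanning your network."""
    enrichment = {"ip": ip_address}
    # Reverse DNS can hint at who owns the host or where it is hosted.
    try:
        enrichment["hostname"] = socket.gethostbyaddr(ip_address)[0]
    except (socket.herror, socket.gaierror):
        enrichment["hostname"] = None
    enrichment["known_campaigns"] = lookup_threat_feed(ip_address)
    return enrichment

print(enrich_ip("203.0.113.10"))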

All these additional threat data insights would provide you with the information you need to react as intelligently and efficiently as possible to block the threat. They would also help you know how dangerous the threat might be. For example, a threat from a generic botnet is probably less risky than a targeted attack by sophisticated threat actors, and threat enrichment helps you know the difference.

Which Threat Enrichment Data Do You Need?

The data that threat intelligence enrichment provides can vary widely in scope and form. In general, however, the more data you have to contextualize a threat, the better.

At a minimum, threat enrichment data should include information about where a threat originated, which resources it affected and when the threat was detected or was active. You should also determine whether the threat was correlated chronologically with any separate attacks or attempted attacks that took place against other systems.

In some cases, threat intelligence enrichment can go deeper. For example, as noted above, threat enrichment might provide details about whether the pattern of security events you’ve witnessed is associated with a specific type of attack or group of attackers. This type of information is usually generated by security researchers who systematically study cyber events.

Threat Intelligence Data Sources

There are many ways to obtain threat data that enables threat intelligence enrichment. You should take advantage of all threat intelligence sources available to you.

Start by compiling as much data as you can from your own internal systems to provide context on a threat. This includes information like the time a threat was detected and the systems it affected, as noted above.

You can also use threat intelligence databases or feeds, which record information about known threat types, patterns and actors. Some of these sources, like MISP, are free and open source. Others are proprietary and either require subscriptions or are built into the proprietary security platforms you use.

Automating Threat Intelligence Enrichment

You can, of course, manage your threat intelligence data manually by correlating and comparing it by hand.

That approach, however, is not practical at scale. A better strategy is to use automation tools like Torq, which provides continually updated threat intelligence by automatically collecting enrichment data about threats that may affect your business. An automated approach to threat intelligence enrichment not only saves your team time, but also helps you take full advantage of as much threat data as possible.

Putting Automated Threat Enrichment to Use

To a large extent, you can automate the operationalization of threat intelligence data by using it to drive automated workflows. You can, for example, configure specific actions based on threat enrichment data.

In some cases, however, threat enrichment will require some manual effort. In the case of complex threats, your team will need to study enrichment data by hand to determine the best course of action.

But in general, you should take advantage of automation wherever possible. The more you automate, the faster you can block threats and the lower your overall security risk.

An Introduction to Automation Basics

Automation is a powerful tool. With some foresight and a little elbow grease, you can save hours, days, or even months of work by strategically automating repetitive tasks. What makes automation particularly beneficial is that it eliminates manual interaction with multiple systems.

Rather than manually uploading data to an event response system or notifying key support personnel of an incident, tying these tasks together through automation can save critical time and help resolve problems faster and more efficiently. But before we can fill in the gaps between all of the platforms we are responsible for, we first need to understand how data moves around on the web and how we can use that process to our advantage.

How Automation Works

To begin automation, we first have to understand how data gets moved around on the web and what methods are available for connecting different services. In the real world, we have phone calls and emails to coordinate between different entities, but on the web, we have “protocols.” The most common protocols for moving data from one service to another are arbitrary HTTP requests, formal APIs, and webhooks.

HTTP Requests

The World Wide Web is built almost solely on the concept of HTTP requests. These are the requests that browsers make to push and pull data from the websites they are interacting with. While this data is often interpreted and rendered as HTML, so-called arbitrary HTTP requests can be used for much more.

Whenever data is requested to update a website (such as the current weather, news, or any other type of information), a simple GET request is made to a target address, instructing its underlying server that you are trying to retrieve some information. This information can be used to build internal dashboards and automation tools, or even for more advanced use cases like supplementing information within Torq.

On the flip side, when data needs to be pushed back out (such as when a web form is filled out and submitted), a POST request is made. This is a great solution for automatically filing support tickets or sending emails using third-party platforms without formal API support.
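
As a rough illustration, the sketch below shows both request types using Python's requests library. The URLs, query parameters, and payload fields are placeholders, not a real service.

import requests

# GET: pull data down, such as the current weather for an internal dashboard.
response = requests.get("https://api.example.com/weather", params={"city": "Boston"})
print(response.status_code, response.json())

# POST: push data back out, such as filing a support ticket on a third-party platform.
ticket = {"subject": "Suspicious login detected", "priority": "high"}
response = requests.post("https://helpdesk.example.com/tickets", json=ticket)
print(response.status_code)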

Third-Party APIs

Speaking of API support, one of the most common ways to automatically send and receive data within online services is through the use of formal APIs. An API (or, for the truly uninitiated, an Application Programming Interface) is a set of contracts that can be used to interface with a third-party application.

In the case of the web, APIs are generally powered by HTTP requests, but with a bit more formality. They offer official support for things like authentication, authorization, and rate limiting, in addition to stability and longer-term commitment to the request contracts. In other words, if you need to integrate with a third-party service to either push or pull data, using an API is far more stable than using arbitrary HTTP calls.

Webhooks

The unintuitively named “webhook” is a web-based endpoint that listens for data from some external source and reacts to it in a pre-defined way. Rather than manually (or repetitively) polling for data using an HTTP request or third-party API call, a webhook can be used to receive that information as soon as it is made available. Think of it like an API, but in reverse.

For example, Slack, Twitter, Stripe, and many other providers can send JSON-formatted payloads to any defined address, allowing you to update internal databases as information changes in real time, or even trigger Torq workflows for more complex operations.
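
For a sense of what sits on the receiving end, here is a minimal webhook listener sketched with Flask. The endpoint path and the event_type field are illustrative assumptions; each provider documents its own payload shape.

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_webhook():
    # The provider (Slack, Stripe, etc.) POSTs a JSON payload as soon as
    # something changes, so there is no need to poll for updates.
    payload = request.get_json(force=True)
    print("Received event:", payload.get("event_type"))
    # React here: update a database, trigger a workflow, send a notification.
    return {"status": "received"}, 200

if __name__ == "__main__":
    app.run(port=8080)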

Choosing Your Automation Methods

Connecting an unknown number of services and systems together is no mean feat. It can require a lot of coordination and planning to ensure that the defined automations work as expected, and even then, there is always a chance that the method used to integrate those services won’t stand the test of time.

So, how do you choose an automation method? When is a webhook a better choice over an API call? When should you use an API call over an arbitrary HTTP request? There are a lot of variables to take into account, but it generally comes down to weighing your needs against what’s possible.

Speed vs. Reliability

It’s no secret that an API is far more reliable than an arbitrary HTTP request, but sometimes developing against an API requires more work and overhead than a simple HTTP call. When connecting multiple services into a cohesive automation, determining your risk profile when it comes to speed versus reliability is key. Proofs-of-concept and non-mission-critical integrations are common scenarios where it might make more sense to quickly create an HTTP request instead of an equivalent API call.

Time-Sensitivity

Webhooks are incredibly useful when you need to react to the data as it changes, but this may not always be what you need. Maybe you want to update data on a slower cadence (such as daily or weekly), or maybe you want to batch the events that get triggered by changes in webhook data. A good rule of thumb is that if changing data has time-sensitive consequences like alerts or other automations, then webhooks are the way to go (if available); otherwise, you can feel free to pull the data down only when you need it.

Building Your Automation Workflow

Successful automation is a game of checks and balances, and how you connect multiple systems together is often a balance between what is possible and what is practical. Sure, integrating with formal API specifications across all of your platforms might be the “right” way to do things, but it’s important to also consider the time cost of doing that work.

Sometimes, a combination of simple HTTP requests and webhooks can solve your specific use case while cutting down on implementation time. Ultimately, what matters when making integration decisions is weighing how quickly you can spin something up, given the available solution paths, against how stable the result needs to be.

To ensure a more consistent and frustration-free experience with automations, platforms like Torq can help establish these connections for you. Torq provides hundreds of pre-built integrations that can perform common security tasks across other tools. This eliminates the need to build and adapt API integrations yourself, and can even help consolidate multiple API calls into a single pre-built action.

Whichever approach you choose, understanding the benefits and liabilities of each model will start you on the path to success.

JSON Basics: Building Blocks for Workflow Automation

Automation workflows add a lot of value to an organization’s day-to-day operations. At a minimum, they streamline the execution of complex, multi-step processes, thereby allowing people to focus on higher-value tasks. On top of that, automation workflows can provide valuable insights through the metrics that they gather – including the number of requests, the date and time they were requested, the time it took to complete each request, who made the request, and much more.

At first, automated workflows functioned much like a basic assembly line, where workers only knew how to perform one step in the whole process. Now, modern automation solutions like Torq’s no-code platform are able to use the data passed into a certain step, together with the data generated in that step, to make decisions about retries, failures, and next steps in the process.

This is especially important when it comes to security and auditing. In addition to providing a more complete record of what is happening, that context can also be used to decide what a requester can send or receive at each step. For example, while someone in the payroll department can access salary data that someone on the helpdesk cannot, both can see who the employee’s manager is.

JSON Basics

Since modern intelligent automation workflows are built around their data, that data needs to have a consistent format across all steps in the workflow. The format that Torq uses to contain that data is JavaScript Object Notation, better known as JSON. Because JSON is a text-based, self-describing format, it is easy to work with and very flexible. Compared to older and more formally structured formats like XML (eXtensible Markup Language), it requires less overhead to process and less storage space to archive. It is also easier to extend on the fly without needing to refactor multiple schemas to ensure backwards compatibility.

JSON Basic Structure

JSON is also human-readable, since it is based on the concept of key:value pairs and follows basic formatting rules. White space serves only to make the data easier for humans to read. You must use a valid format, which normally means beginning and ending with curly brackets (i.e. { }), although square brackets (i.e. [ ]) are used in some cases. In addition, every element except the last one needs to be followed by a comma to indicate that more values follow.

In the following JSON key:value example, the keys are shapes and the value of each key is the number of corners that the shape has.

{
    "triangle": 3,
    "square": 4,
    "octagon": 8
}

Basic JSON key:value Example

Data Types

When it comes to values, there are really only three data types. However, the values can be stored in arrays or objects, as defined below:

Type    | Description                                        | Example
String  | Alphanumeric sequence (written in double quotes).  | "day": "Saturday", "time": "2021-03-11"
Number  | An integer (not in double quotes).                 | "guestsNumber": 25
Boolean | Value can be true/false (not in double quotes).    | "surpriseParty": false

Note: Numbers and Boolean values don’t need to be contained in quotes. However, string values and key names must be contained in quotes.

What Is a JSON Object?

JSON objects are items defined with multiple unique key:value pairs below them. Objects are contained within curly brackets, which is why most JSON data that is handled within these workflows will start and end with curly brackets. In fact, all of the data used within a workflow is one single object containing multiple sub-objects.

If we extend our previous example to include the number of sides as well as corners, we’ll end up with a unique object for each shape:

{
    "triangle": { "sides": 3, "corners": 3 },
    "square": { "sides": 4, "corners": 4 },
    "octagon": { "sides": 8, "corners": 8 }
}

JSON Object Example

JSON Arrays

Now you know how to create simple key:value pairs and unique objects. Sometimes, however, you need to record things as data using a common format, but the data itself is unique for each item. In such cases, you would define an array using square brackets ( [ ] ) around the set of key value pairs that need to be stored in the data.

For example, you can make a single object called “shapes” that contains an array for the data: 

{
    "shapes": [
        { "type": "triangle", "sides": 3, "corners": 3 },
        { "type": "square", "sides": 4, "corners": 4 },
        { "type": "octagon", "sides": 8, "corners": 8 }
    ]
}

JSON Array Example

How to Use JSON to Reference Data

Now that you know what the structure of JSON looks like and how easy it is to follow, we’ll explain how to address specific places inside the JSON data. To do so, you can either target a specific value or grab an entire object or array.

Referencing JSON Objects

Let’s start with the basics of accessing data from an object. JSONpath is built using dot notation, which is a common type of syntax used in many programming languages to access the properties of an object. The basic JSONpath for accessing an entire object is "$." These two characters will be at the beginning of every JSONpath in Torq.

For instance, to access the value of "triangle" in the first example (a simple JSON with a few key value pairs), you’d begin the path with the root "$." and add the name of the key that you want to retrieve. So, in our example, "$.triangle" would return the value of 3.

Let’s say you wanted to access something that’s multiple levels down in the object. Using the JSON in the second example, you’d build on the base of "$.triangle" and add ".sides". So, in this case, "$.triangle.sides" would return the value of 3.

Referencing JSON Arrays

Arrays are handled slightly differently, since they consist of multiple instances of data in a single object. To access data in an array, you can use square brackets and specify the desired record number. Or, if you leave the square brackets off, you’ll get the entire object back. For instance, using the JSON in the third example, you’d start with the base and ask for all of the records in shapes with the "$.shapes" JSONpath. You would use "$.shapes[0]" if you only wanted the first record. (In JSON, record numbers start at zero, not one.)

You can also pull back the number of sides in every record without pulling the rest of the data. The syntax is similar, except that you replace the index number with a colon to access all records. So, "$.shapes[:].sides" would return [3, 4, 8] as the result.

Once you’ve mastered the art of navigating JSON, you can start to do more advanced filtering within JSONpath. Using the third example, "$.shapes[?(@.sides>5)]" would return a record of every shape in the array that has more than 5 sides.
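
If you want to experiment with these paths outside of Torq, the sketch below uses the third-party jsonpath-ng Python library against the array example from above. Note that this library uses [*] as its "all records" wildcard, and filter syntax such as [?(@.sides>5)] varies between implementations, so check the documentation of whichever tool you use.

from jsonpath_ng import parse  # third-party library: pip install jsonpath-ng

shapes_doc = {
    "shapes": [
        {"type": "triangle", "sides": 3, "corners": 3},
        {"type": "square", "sides": 4, "corners": 4},
        {"type": "octagon", "sides": 8, "corners": 8},
    ]
}

# First record only (remember: record numbers start at zero).
first = [m.value for m in parse("$.shapes[0]").find(shapes_doc)]
print(first)   # [{'type': 'triangle', 'sides': 3, 'corners': 3}]

# Number of sides from every record.
sides = [m.value for m in parse("$.shapes[*].sides").find(shapes_doc)]
print(sides)   # [3, 4, 8]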

There are many online tools that you can use to validate that these examples really work (like JSONpath.com). In addition, Stefan Goessner has a great reference page with more examples of filtering and syntax.

JSON-Based Workflows

Now that you know what the data structure looks like in JSON, as well as how to reference specific values in that data with JSONpath, you have the option to build highly customized workflows to bring sanity and a sense of control to the most challenging manual work within your organization. Not that you’d need to build it all yourself, since Torq offers data-driven, zero-code security automation. They also provide documentation with more information about JSON and some advanced examples of JSONpath. To learn more about what Torq brings to the market, you can begin by checking out their Getting Started page.

SentinelOne Integrates with Torq, Streamlining SOC Workflows with Automated Incident Response

Joint Solution Leverages SentinelOne Security Data for Improved Alert Triage and Remediation

June 28, 2022 09:00 AM Eastern Daylight Time

MOUNTAIN VIEW, Calif.–(BUSINESS WIRE)–SentinelOne (NYSE: S), an autonomous cybersecurity platform company, today announced a new integration with Torq, a no-code security automation platform. The combination of SentinelOne and Torq allows security teams to accelerate response time, reduce alert fatigue, and improve overall security posture.

“SentinelOne’s powerful intelligence and protection helps security teams protect their employees and customers – no matter how complex the environment,” said Eldad Livni, Chief Innovation Officer, Torq. “With Torq, security teams can extend the power of SentinelOne to systems across the organization to automate workflows, respond faster, maintain/boost compliance to benefit from a proactive security posture.”

The SentinelOne integration with Torq combines SentinelOne’s powerful detection and protection with Torq’s no-code automation, enabling customers to limit alert fatigue, respond to threats at machine speed, and proactively identify and remediate risks. Torq makes it easy for security teams to create automated workflows, with a drag and drop workflow builder and hundreds of templates aligned with industry best practices and frameworks from MITRE and NIST. With robust data from SentinelOne, the Torq solution has access to more high-fidelity threat data for improved enrichment, accelerated response times, and alert fatigue reduction.

Torq workflows can listen for SentinelOne alerts, and ingest these to trigger action in any security or operations tool. The solution deploys out-of-the-box in minutes with no coding, installation, or ‘connectors’ needed. Key benefits of the integration include:

  • Real-time threat enrichment – automatically enrich alerts from any system with data directly from SentinelOne Singularity.
  • Automated remediation – remediate threats with fully autonomous or partially autonomous remediation workflows to accelerate mean time to respond.
  • Optimize SOC workflows – clearly and quickly orchestrate threat hunting, information sharing, and ticket creation for vulnerability management.
  • Bot-driven collaboration – Create no-code interactive chat bots that allow users to perform critical actions, run deep visibility queries, or control SentinelOne endpoints from within Slack or other chat tools.

“The SentinelOne-Torq integration provides joint customers with a powerful combination of best-in-breed automated security solutions,” said Ruby Sharma, Head of Technical Partnerships, SentinelOne. “Not only are customers utilizing industry leading endpoint protection and XDR, they also have access to innovative security automation tools that can accelerate workflow automation. We are pleased to make this integration available via the Singularity Marketplace, and we look forward to expanding our offerings to address even more use cases.”

The SentinelOne-Torq integration is available via SentinelOne’s Singularity Marketplace. For more information visit www.sentinelone.com.

About SentinelOne

SentinelOne’s cybersecurity solution encompasses AI-powered prevention, detection, response and hunting across endpoints, containers, cloud workloads, and IoT devices in a single autonomous XDR platform.

Contacts

Will Clark 
fama PR for SentinelOne 
E: [email protected]

Automatically Update URL Blocklists in Zscaler Using Torq

Blocking access to certain URLs is a simple, effective strategy for protecting users and the network. But, in a world where new and increasingly sophisticated scams seem to appear almost weekly, the task of maintaining that list can become overly burdensome when performed manually. 

Torq offers a number of ways to automate URL blocklist management, reducing manual effort and speeding up response to new threats.

How to automate URL blocklists using Torq

All Torq users have access to the pre-built workflow template Add and Remove URLs from the Global Blacklist (Zscaler). This flow uses the Torq chatbot to check URLs on request, then adds them to a global blocklist in Zscaler if needed.

The default applications in this workflow are Slack and Zscaler, for chat and network security respectively. However, these can be customized with just a few clicks. 

Here’s how it works:

  1. A user sends a request to the Torq bot, either to check an unknown URL or to remove a previously-blocked URL. 
  2. If removing, the bot will return the associated information from Zscaler and ask to confirm removal before finishing the process. 
  3. If adding a new URL, Torq will return the associated categories from Zscaler, and ask to confirm the block request.
  4. Torq performs the requested action within Zscaler, then generates an updated list of blocked URLs. 
  5. The Torq bot then sends a confirmation of the request, along with the updated list for the user to reference. 

A portion of the Torq workflow for automating URL blocklists in Zscaler

This is a good example of how simple, off-the-shelf templates from Torq can help you automate security tasks in a matter of minutes, giving analysts time back for higher-impact work.
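
For readers curious about the underlying logic, here is a rough Python sketch of the check/confirm/update flow described above. Every helper function is a hypothetical placeholder; in practice, Torq’s pre-built Zscaler and chatbot steps make these calls for you.

# Hypothetical placeholders for the Zscaler and chatbot steps in the workflow.
def zscaler_lookup_categories(url):
    return ["Phishing"]              # placeholder category lookup

def zscaler_update_blocklist(url, action):
    pass                             # placeholder add/remove API call

def zscaler_get_blocklist():
    return ["bad.example.com"]       # placeholder for the current blocklist

def ask_user_to_confirm(message):
    return True                      # placeholder chatbot confirmation prompt

def handle_request(url, action):
    """action is either 'add' or 'remove', mirroring the workflow steps above."""
    categories = zscaler_lookup_categories(url)
    if action == "remove":
        prompt = f"Remove {url} ({categories}) from the blocklist?"
    else:
        prompt = f"{url} is categorized as {categories}. Block it?"
    if not ask_user_to_confirm(prompt):
        return "Request cancelled."
    zscaler_update_blocklist(url, action)
    updated = zscaler_get_blocklist()
    return f"Done. The blocklist now has {len(updated)} entries."

print(handle_request("bad.example.com", "add"))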

Get the workflow template

Already a Torq customer? You can find this workflow and dozens more in the Torq template library. There you can find other network security workflows, like Analyze Suspicious URLs and IPs in VirusTotal, Block Malicious Files as IOCs using CrowdStrike, and Create IP Penalty Box with Timeout using Cloudflare.

Get Started Today

Not using Torq yet? Get in touch for a trial account and see how the no-code security automation platform unifies your security, infrastructure, and collaboration tools to create a stronger security posture.

How to Automatically Suspend Inactive Accounts Using Torq

Contractors, freelancers, and other temporary workers have become essential parts of the modern enterprise. For IT and security teams, these individuals present unique challenges compared to full-time workers—and potential risks. 

The ‘offboarding’ process for these contractors is often less formal than bringing them on, meaning many just stop using their entitlements and accounts without actually closing them. These dormant accounts can pose serious risks to the organization.

A simple solution is to monitor these accounts and deprovision them after a set amount of inactivity. But for organizations with dozens or even hundreds of contractors at any given time, this solution does not scale. They need to automate the process in order to maintain efficiency.

How Torq automates contractor deprovisioning

Torq users can automate this process in just a few minutes with the workflow template Suspend Inactive Contractors After 7 Days of Inactivity.

By default, the workflow runs once a day, and as the name suggests, it will check the past seven days for logins. Likewise, Okta and Slack are the default apps for identity provider and chat, respectively. 

All of these conditions can be customized based on your organization’s needs. So for example, you can just as easily check once a week, looking back one month, using Azure AD and Microsoft Teams. 

Here’s how it works:

  1. Torq will pull all active accounts with a user type of “contractor”, then filter to show the ones with no logins in the past seven days.
  2. The Torq chatbot sends the list to a designated Slack channel and asks for approval to suspend. 
  3. If the request is denied, or if the request times out, the process is terminated and the Slack channel is notified. 
  4. If approved, Torq will tell Okta to suspend each account, then log successes and/or failures in case the information is needed for audit later on. 
  5. Once the process is complete for each inactive account, a final update is sent to the Slack channel to notify the team.

Torq workflow for automatically suspending inactive contractors

This is a good example of how pre-built templates in Torq can help automate tedious-but-critical tasks like suspending users. It’s quick and easy to set up, and includes some powerful variables to help you tailor the workflow to your policies.
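
For teams that want to see what this looks like at the API level, here is a rough sketch against Okta’s REST API. The search expression assumes contractors are flagged with a profile.userType attribute, and the approval step that the Torq template handles through chat is omitted; treat the field names and filter as assumptions to adapt to your own directory.

from datetime import datetime, timedelta, timezone
import requests

OKTA_ORG = "https://your-org.okta.com"              # placeholder Okta tenant
HEADERS = {"Authorization": "SSWS YOUR_API_TOKEN"}   # placeholder API token

# 1. Pull active users whose profile marks them as contractors.
params = {"search": 'profile.userType eq "contractor" and status eq "ACTIVE"'}
users = requests.get(f"{OKTA_ORG}/api/v1/users", headers=HEADERS, params=params).json()

# 2. Keep only accounts with no login in the past seven days.
cutoff = datetime.now(timezone.utc) - timedelta(days=7)
stale = [
    u for u in users
    if not u.get("lastLogin")
    or datetime.fromisoformat(u["lastLogin"].replace("Z", "+00:00")) < cutoff
]

# 3. After approval (handled in chat by the Torq workflow), suspend each account
#    and log the result in case the information is needed for audit later on.
for user in stale:
    resp = requests.post(f"{OKTA_ORG}/api/v1/users/{user['id']}/lifecycle/suspend",
                         headers=HEADERS)
    print(user["profile"]["login"], resp.status_code)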

Get the workflow template

Already a Torq customer? You can find this workflow and dozens more in Torq’s template library. Just add it to your Torq account, and then connect your identity provider and chat app.

Get Started Today

Not using Torq yet? Get in touch for a trial account and see how the no-code security automation platform unifies your security, infrastructure, and collaboration tools to create a stronger security posture.

Automated Developer-First Security: Our Partnership with Snyk

Today’s developers move at an increasingly rapid pace, making it more critical than ever to identify and resolve code vulnerabilities early in the software development lifecycle. By tackling security early, instead of waiting until testing and deployment, engineering teams can reduce unnecessary patching and maintenance cycles, reduce risks, and ensure timely delivery of new features.

Many of our customers rely on Snyk’s developer-first security to keep their applications, dependencies and infrastructure-as-code free from vulnerabilities. Snyk’s integration into the tools developers use to write and deliver code ensures security issues are caught and remediated as early as possible.

We’re excited to announce our partnership with Snyk as a member of their TAPP initiative. As a Snyk TAPP member, we are able to build, integrate, and go to market as quickly as possible with new solutions that address the most pressing security challenges we face with modern application development and technologies.

Torq’s no-code automation extends the power of Snyk to any combination of security and collaboration tools in the enterprise.  Developer and security teams alike benefit from automated Torq workflows that can be deployed in a few clicks from our hundreds of templates, or created with a drag and drop workflow builder.  These workflows help ensure that Snyk’s findings are triaged, assigned and remediated – no matter the speed  or scale of application delivery. 

Orchestrate Application Security at Speed and Scale

When Snyk detects new vulnerabilities, tracking them in a ticketing system like Jira is critical to ensuring teams have the knowledge and visibility to remediate the issues. But it’s easy for tickets to become overwhelming, especially at the pace of modern engineering and DevOps teams. Without effective prioritization and escalation, it’s difficult to know what to fix first, leaving your applications at risk.

Connecting Snyk and Torq solves this problem by orchestrating ongoing triage, prioritization, and escalation workflows. These workflows keep ticket owners up to date on the latest critical and high severity issues Snyk detects, and they escalate unresolved tickets after a set time period to make sure that vulnerabilities are addressed.

How it works

Torq’s template library contains hundreds of templates for almost any security process.  With just a few clicks, users can import templates into their Torq environment, then easily connect the workflow to their own tools, or make customizations as needed.   Below is an example of a template that uses Snyk, Jira and Slack.  

To get started, simply provide a Snyk API key to Torq, and connect your Jira and Slack instances.  Then add the template from Torq’s template library.  This will give you a workflow that does the following on a daily basis:

  1. Identifies all projects in Snyk that have unresolved issues with severity Critical or High
  2. For each project, verifies that there is a Jira ticket open and assigned. If no Jira ticket is found, one is automatically created and assigned to the Snyk project owner.  Notifications for new tickets are then sent to owners using Slack
  3. For any tickets open longer than 48 hours, a Slack message is sent to the security team.  This message contains two buttons – one to remind the ticket owner, and one to escalate the issue. The recipients and time period are fully customizable – and can be changed in just a few clicks.
    1. If escalation is chosen, a Slack message is sent to the owner’s manager or another specified escalation point.
    2. If a reminder is chosen, a Slack message is sent to the ticket owner.

This process ensures that high and critical vulnerabilities are kept visible to code owners, engineering managers, and security teams so that fixes can be prioritized and delivered. By automating this process, the manual work of reviewing Jira tickets, matching them to Snyk issues, and sending reminders or escalations is eliminated.
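
The core of that escalation logic is simple enough to sketch in a few lines of Python. The helper functions below are hypothetical placeholders for the Snyk, Jira, and Slack steps that the Torq template provides out of the box; only the 48-hour decision is real logic.

from datetime import datetime, timedelta, timezone

# Hypothetical placeholders for the Snyk, Jira, and Slack workflow steps.
def get_snyk_projects_with_open_issues():
    return [{"name": "payments-service", "owner": "fen"}]

def find_jira_ticket(project_name):
    return None  # pretend no ticket exists yet

def create_jira_ticket(project_name, assignee):
    return {"key": "SEC-101", "assignee": assignee,
            "created": datetime.now(timezone.utc)}

def send_slack_message(recipient, text):
    print(f"[slack -> {recipient}] {text}")

ESCALATION_WINDOW = timedelta(hours=48)

for project in get_snyk_projects_with_open_issues():
    ticket = find_jira_ticket(project["name"])
    if ticket is None:
        # Step 2: open a ticket and notify the Snyk project owner.
        ticket = create_jira_ticket(project["name"], project["owner"])
        send_slack_message(project["owner"], f"New ticket {ticket['key']} assigned to you.")
    elif datetime.now(timezone.utc) - ticket["created"] > ESCALATION_WINDOW:
        # Step 3: the ticket has been open too long; let the security team
        # choose between a reminder to the owner or an escalation to a manager.
        send_slack_message("security-team",
                           f"{ticket['key']} has been open for more than 48 hours.")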

4 Database Access-Control Methods to Automate

This post was previously published on The New Stack

Regardless of which role a person has in an organization, they will always need access to one or more databases to be able to perform the functions of their job. Whether that person is a cashier at McDonald’s or a technical account manager supporting a Fortune 500 company, data entry and retrieval is core to the services they provide. 

In this article, we will explore some of the benefits that automation brings to an organization’s data security. We will explain how introducing automation into existing database access-control methods can increase efficiency and consistency, and we will also discuss how security-focused automation adds extra layers of protection, like improved data integrity and privacy controls, that help your business stay secure.

Removing Direct Access to Databases

Before modern technologies, all client information was readily available to everyone in the office in a nearby filing cabinet. Later, that same concept was transferred to electronic databases where everyone looks up everything in “the system.” 

This model is arguably easier to build, but it’s not scalable since all the data in each system has to be available to all employees, all of the time. It also increases the amount of manual cross-checking that people need to do between systems. And don’t forget the risk of data drift, as well as the heightened risk of data leakage.

There are many benefits to automating data access between the people who ask for it and the actual databases themselves. Automated workflows can create a full view of your data by automatically pulling the requested pieces of information from their sources of truth.

For example, when you pull an employee profile from an automated system, contact information comes from the HR system, information about currently assigned projects comes from a tool like Jira, and the list of corporate assets that the employee has signed out is pulled from a tool like ServiceNow.

In addition, automated database access-control methods can reduce duplicate data entry, which can in turn reduce errors and drift. In the aforementioned employee profile, for example, the contact information always comes from the HR system, so the payroll system doesn’t need to have its own copy, nor does the helpdesk solution.
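
Conceptually, the automation layer behaves like the small sketch below: each helper is a hypothetical placeholder for a call to the relevant system of record, and the profile is assembled on demand rather than copied into yet another database.

# Hypothetical placeholders for calls to each source of truth.
def get_contact_info(employee_id):
    return {"name": "Fen", "email": "fen@example.com"}   # from the HR system

def get_assigned_projects(employee_id):
    return ["PROJ-42", "PROJ-77"]                        # from Jira

def get_signed_out_assets(employee_id):
    return ["laptop-0193", "yubikey-5512"]               # from ServiceNow

def build_employee_profile(employee_id):
    """Assemble the profile on demand instead of duplicating the data."""
    return {
        "contact": get_contact_info(employee_id),
        "projects": get_assigned_projects(employee_id),
        "assets": get_signed_out_assets(employee_id),
    }

print(build_employee_profile("e-1001"))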

The Principle of Least Privilege

Adding a proxy between people and data by using automated workflows also allows you to embed security best practices and other controls. The principle of least privilege is at the core of these data access controls. 

For example, if someone is in a certain sales group, the automated solution can filter out all data that isn’t relevant to their needs. The same goes for people who pick orders in the warehouse; they don’t need to see how much every item costs or which credit cards are being used. You can make this as fine-grained as you want, but it requires that you put data access controls in place to support the safeguards.

A second approach that some organizations take is to log everything and audit it against what people are supposed to be doing rather than block access to the areas that people don’t need to access. This is technically easier to build, but it requires more people to run.

Data Access Approval Requests

The beauty of using security automation as a data broker is that it has the ability to validate data-retrieval requests. This includes verifying that the requestor actually has permission to see the data being requested. 

If the proper permissions aren’t in place, the user can submit a request to be added to a specific role through the normal request channels, which is typically the way to go. With automated data access control, this request could be generated and sent within the solution to streamline the process. 

This also allows additional context-specific information to be included in the data-access request automatically. For example, if someone requests data that they do not have access to within their role, the solution can be configured to look up the database owner, populate an access request and send it to the owner of the data, who can then approve one-time access or grant access for a certain period of time. A common scenario where this is useful is when an employee goes on vacation and someone new is helping with their clients’ needs while they are out.  

Audit Trails

As we mentioned above, some organizations might opt to log everything to track who is doing what. Any good data security automation solution will have the capability of creating extensive audit logs. This audit capability can – and should – be used to track both positive and negative events. A positive event would be granting Fen permission to see the data that she is requesting, while a negative event would be refusing Vijay access to the data of a patient who is seen at a different branch of the clinic.

Both types of events can be mined for trends. Every time Netflix alerts you that you’ve logged in from a new location, for example, it’s because its solution logged a positive authentication event and the backend solution then did something with that event when it arrived.

Automated Data Access Workflows

As we outlined above, incorporating secure data-access workflows that are run within automation frameworks into your existing business processes improves the integrity of the data being moved and ensures better privacy controls by showing only the data that is required. It also exposes more metrics, which can be tracked to find more areas that can be optimized and more places where additional automation might add more value. Companies like Torq can help organizations introduce data security automation into their infrastructure. Torq’s solutions are designed to address common scenarios as well as high-value use cases.

How to Automate Intune Device Reports with Torq

Whether for managing remote teams, supporting ‘bring your own device’ (BYOD) policies, or simply adding another layer to a data protection strategy, services like Microsoft Intune offer greater control over the devices on your network. But using the data from these services often requires tedious prep work, and this process is likely repeated multiple times a week, if not daily.

Tedious, repetitive, structured: these are all signs that a process can and should be automated. Torq offers dozens of pre-built templates to help security teams add efficiency to processes like these. Here we’ll show a workflow that automatically generates a daily report on device compliance from Intune, and delivers it to Slack. 

How Torq can automate device compliance reports  

The default trigger for this workflow is set to run once a day, but you can customize the frequency based on your needs. Similarly, the default chat application is Slack, but changing to Microsoft Teams or other apps takes just a few clicks.

Here’s how it works:

  1. Torq will generate an access token and pull the list of devices from Intune, then filter for the ones that are tagged as non-compliant. 
  2. It will loop through each of those devices to look for a registered user, then split the list based on whether or not a user is found.
  3. Next it generates the actual report, which is built from a set of pre-defined messages that you can customize.
  4. Finally, the last step is to send everything to a designated Slack channel. 

A segment of the workflow template available in Torq

This is a good example of how a relatively simple, pre-built template can make a big impact on recurring security activities. With just a few minutes of setup, you can eliminate hours of tedious work and improve your compliance efforts. 
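
For comparison, here is roughly what those steps look like as a standalone Python script against Microsoft Graph. The Graph endpoint, the complianceState filter, and the required application permission (DeviceManagementManagedDevices.Read.All) are assumptions to verify against Microsoft's documentation; all credentials and the Slack webhook URL are placeholders.

import msal       # Microsoft's auth library: pip install msal
import requests

TENANT_ID = "your-tenant-id"                             # placeholder
CLIENT_ID = "your-app-registration-id"                   # placeholder
CLIENT_SECRET = "your-client-secret"                     # placeholder
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"   # placeholder

# 1. Acquire an access token for Microsoft Graph (client-credentials flow).
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# 2. Pull managed devices from Intune and keep the non-compliant ones.
headers = {"Authorization": f"Bearer {token['access_token']}"}
resp = requests.get(
    "https://graph.microsoft.com/v1.0/deviceManagement/managedDevices",
    headers=headers,
    params={"$filter": "complianceState eq 'noncompliant'"},
)
devices = resp.json().get("value", [])

# 3. Build the report, noting whether a registered user was found for each device.
lines = [
    f"{d.get('deviceName', 'unknown')} - user: {d.get('userPrincipalName') or 'none found'}"
    for d in devices
]
report = "Non-compliant devices today:\n" + ("\n".join(lines) or "none")

# 4. Send everything to a designated Slack channel via an incoming webhook.
requests.post(SLACK_WEBHOOK, json={"text": report})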

Get the workflow template

Already a Torq customer? You can find this workflow—Generate report on non-compliance devices (Intune)—and many more in the template library. Just add it to your Torq account, provide your Microsoft credentials, specify the report frequency, and enjoy. 

Or, check out some of the other device management templates like Provide temporary device admin rights for Mac users, Rename new mobile device to ‘User–Serial Number’, or  Add/Remove Azure AD users from global lists.

Get Started Today

Not using Torq yet? Get in touch for a trial and see how our no-code automation platform can add efficiency to your operations and improve overall security posture.

Automated Threat Hunting: A Closer Look

This post was previously published on The New Stack

Proactively finding and eliminating advanced threats through threat hunting is a growing necessity for many organizations, yet few have enough resources or skilled employees to do it effectively. For those who do have an active threat hunting program, the process is often manual and time consuming. 

With cloud security automation, however, you can implement rules that automatically adjust your security policies based on the latest threat data. As a result, you can achieve automated threat hunting, which lets you perform expert-level threat hunting at machine speed.

When you employ security automation technologies, you eliminate two major roadblocks to efficient threat hunting: a lack of in-house cybersecurity experience and the inability to apply threat intelligence reports from outside sources to your environment. Other advantages of automating threat hunting include decreasing a potential threat’s “exposure window,” handling multiple threat-hunting sessions simultaneously and implementing uniformly effective threat hunting procedures.

Automating threat hunting can also help cloud and cloud-native enterprises speed up their network security processes, lower operating costs and improve their ability to respond quickly to advanced cybersecurity threats. This article delves deeper into the threat hunting use cases discussed in a previous Torq blog post, Threat Hunting Like a Pro — With Automation.

Automate EDR, XDR, SIEM and Other Queries

To kick-start security automation in threat hunting, your first steps should include investing in automation tools such as extended detection and response (XDR), security information and event management (SIEM), endpoint detection and response (EDR) and anomaly detection platforms. These tools are traditionally manual, but with automation tools like Torq, they can be configured with threat detection rules and alerts to kick off distributed search efforts and reach conclusions whenever a new exploit technique is discovered. This integration brings all cybersecurity platforms into a single pane of glass, which could help you streamline the process of responding to these alerts.

SIEMs, EDRs, XDRs and other threat hunting tools are used for real-time security event analysis to help with investigation, early threat detection and incident response. They also provide you with comprehensive alert information, which helps you monitor, detect and respond to potential attacks on the threat hunting portal emanating from endpoints, cloud workloads, networks, emails and identity management systems. For instance, Torq workflows can be triggered by events from existing security systems, such as SIEM alert rules, EDR/XDR detection alerts and anomaly detection alerts. Information and anomalies from each system can be correlated and analyzed to identify potentially malicious activity and instances of compromise.

Share Threat Hunting Templates with Your Team Members

Every SOC team uses custom templates, which are shared with team members to ensure the most efficient threat hunting workflows. These threat hunting templates serve as playbooks for automating investigations received from the SIEM/EDR/XDR queries discussed above. All of the signals and alerts generated are grouped by detection types and listed with their relevant denotation scores and associated context. Once the alerts have been contextualized, team members single the groups out for in-depth investigation according to the workflow templates.

When you use Torq, all threat alert queries with suspicious files are detonated in a sandbox for investigation. Once the detonation is complete, the findings are investigated to determine if the files are malicious.

Trigger Search Processes With Workflows

These workflows can activate search processes across various systems to identify further events and evidence. This helps reduce the amount of manual investigation and decision-making during tense periods. Examples of such searches include EDR/MDM searches, SIEM/log store searches and email/storage searches. You can also perform additional investigations, enrich case management systems and initiate remedies for each finding.

Use Playbooks for Automated Incident Response

After a potential threat has been found, one of the most important tasks in threat hunting is incident response. Playbooks serve as manuals for procedures and threat analysis when responding to threats automatically. During ad-hoc investigations, threat hunting playbooks are launched on demand to show teams the next steps in blocking, containing or remediating threats.

Trigger Remediation

Upon discovering a threat, a remediation trigger is sent to your SOC team to kick off remediation workflows. At this stage, the team is assumed to have a thorough grasp of the danger and possible consequences of the threat based on the detected signs of compromise. Threat remediation aims to precisely remove risks while reducing organizational damage and optimizing security effectiveness.

The threat hunter’s remediation technique is determined by the sophistication of the hunter and the attack. Basic remediation procedures may be useful in removing the threat in some circumstances. Advanced attackers, on the other hand, can detect and bypass these actions, necessitating more thorough countermeasures. Killing processes, forcing a computer to reboot and restoring from a backup are all examples of basic remediation tactics.

The cyber threat landscape is evolving, and new threats (such as fileless malware) are being developed with the explicit intention of evading existing threat hunting tactics. Multi-stage methods of subtly investigating the initial threat vector, monitoring the state of the affected systems and surgically eliminating the malicious code within the system are some of the more sophisticated threat remediation strategies.

Torq, for example, remediates threats by first quarantining the corrupted file with EDR, then safely deleting the file from cloud storage, quarantining it in the mailbox and adding it to EDR engines in case of future detection.

Giving Security Professionals an Edge

Without automation, threat hunting is impractical for most organizations. This is because automated threat hunting gives security professionals the edge and the tools they need to stay ahead of the increasing number of sophisticated security threats and protect the network from cyberattacks.