Log management is usually implemented – and with good reason – to analyze network security events for intrusion detection and forensic purposes. That is, to see what happened only AFTER a cyber attack has taken place. Granted, detecting cyber attacks as they happen is genuinely hard, and the drivers behind cybersecurity measures have shifted from traditional security requirements toward auditing and compliance, together with a slowly growing acceptance of how important system and application log management is. Whatever your purpose for introducing logging into an IT network, the process itself has 10 distinct steps, which we have prepared here for you to get acquainted with and which will help you protect your company from cybersecurity predators. So, let’s get to it!
1) Policy definition
Policy definition requires the responsible persons to determine what they are going to audit and get alerts on. Is your company interested in security event detection, operations and application management, or compliance auditing? Will you be auditing just workstations and servers, or also applications and network devices? Depending on your company’s needs, you will have to define a policy that will guide everything you do from then on.
2) Configuration
After deciding what and why you want to audit within your IT network, you then need to detail which log events will help you achieve those goals. You may be offered suites or packages with predefined, built-in configurations that support various goals, but these may not match your particular needs. Review what the product you are buying can capture and alert on, then define additional capturing to fit the demands of your environment. Configuration is the act of translating your audit policies into actionable information capture.
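As a sketch of that translation, an audit policy can be expressed as a mapping from goals to the event types that must be captured. The goal names, sources and event types below are hypothetical placeholders, not a vendor’s schema:

```python
# Hypothetical policy-to-capture mapping: each audit goal names the
# sources and event types that must be collected to support it.
AUDIT_POLICY = {
    "security_event_detection": {
        "sources": ["workstations", "servers", "firewalls"],
        "events": ["failed_logon", "privilege_escalation", "dropped_packet"],
    },
    "compliance_auditing": {
        "sources": ["servers", "applications"],
        "events": ["account_created", "policy_changed", "data_access"],
    },
}

def events_to_capture(policy, goals):
    """Union of event types required by the selected audit goals."""
    return sorted({e for g in goals for e in policy[g]["events"]})
```

The point is that every configured capture rule should trace back to a policy goal, so nothing is collected "just in case" and nothing a goal needs is missed.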
3) Data collection
Data collection involves sending log event messages from clients to the log management server. In some cases you will be able to collect data agentlessly; in others, client events must be forwarded to the server. Most log management products provide agent software to assist with data collection where agentless collection doesn’t make sense.
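For illustration, forwarding events to a central collector is often just a matter of pointing a logging handler at it. This sketch uses Python’s standard `SysLogHandler` over UDP; the host and port are placeholders for your own collection server, and UDP delivery is best-effort:

```python
import logging
import logging.handlers

def make_forwarder(host, port=514):
    """Build a logger whose records are forwarded, via UDP syslog,
    to a central collection server. Host/port are placeholders --
    substitute your own collector's address."""
    logger = logging.getLogger("central-forwarder")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(host, port))
    logger.addHandler(handler)
    return logger

# Usage (hypothetical hostname):
# make_forwarder("logserver.example.com").info("disk full on /var")
```

Dedicated agents typically add reliability features this sketch lacks, such as buffering, TLS transport and delivery acknowledgement.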
4) Parsing and normalization
Collected data is often parsed – separated into individual data fields – as it enters the data stream. Parsed, or structured, data is typically easier to index, retrieve and report on. Unparsed data – also known as raw or unstructured – can normally still be collected, but isn’t as easy to index, retrieve or report on. Administrators will often have to create their own parsing, or treat unstructured data as a single data field and rely on keyword searches to retrieve information.
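Creating your own parsing usually means writing patterns for each log format you receive. A minimal sketch, assuming a simplified RFC 3164-style syslog line (real messages vary widely by device and vendor):

```python
import re

# Simplified pattern for RFC 3164-style syslog lines (an assumption;
# adapt it to the formats your devices actually emit).
SYSLOG_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]{8})\s"
    r"(?P<host>\S+)\s"
    r"(?P<app>[\w\-/]+)(?:\[(?P<pid>\d+)\])?:\s"
    r"(?P<message>.*)$"
)

def parse_line(line):
    """Split one raw log line into named fields; return None if unparsed."""
    m = SYSLOG_RE.match(line)
    return m.groupdict() if m else None
```

Lines the pattern cannot match fall back to unstructured handling – stored whole and found only by keyword search.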
Normalization is the process of resolving different representations of the same types of data into a similar format in a common database. In a log management database, this may involve converting reported event times to a common time format – for example, local time to UTC – resolving IP addresses to hostnames, and anything else that makes disparate information more uniform. The more parsed and normalized data you have, the better.
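The local-time-to-UTC case can be sketched with the standard library. The timestamp format string here is an assumption; match it to whatever your log source emits:

```python
from datetime import datetime, timezone, timedelta

def normalize_timestamp(local_str, utc_offset_hours):
    """Convert a local-time string to an ISO-8601 UTC timestamp.
    The "%Y-%m-%d %H:%M:%S" format is an assumed example -- adjust
    it to the actual format of your log source."""
    local = datetime.strptime(local_str, "%Y-%m-%d %H:%M:%S")
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local.replace(tzinfo=tz).astimezone(timezone.utc).isoformat()
```

Once every event carries the same time representation, cross-source sorting and correlation become straightforward.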
5) Indexing
In order to optimize data retrieval for search queries, filters and reporting, data needs to be indexed as it is stored. Indexing works on parsed data; bear in mind that unstructured data takes longer to retrieve.
6) Storage
Captured data needs to be committed to medium- or long-term storage. Products generally save to local hard drives, and some can store to external storage arrays, such as SAN, NAS, and so on. Most products also allow event messages to be exported for long-term storage and retrieved later if needed. If you’re concerned about legal chain-of-custody requirements, make sure the solution you’re evaluating cryptographically signs all stored messages.
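The integrity property behind that signing requirement can be sketched with an HMAC over each stored message – any later tampering invalidates the tag. (This is a sketch only; production chain-of-custody schemes typically involve full digital signatures and proper key management.)

```python
import hmac
import hashlib

def sign(message, key):
    """Attach an HMAC-SHA256 tag so later tampering is detectable."""
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify(message, tag, key):
    """Constant-time check that the stored message still matches its tag."""
    return hmac.compare_digest(sign(message, key), tag)
```

If a stored event is altered after signing, `verify` fails, which is exactly the evidence-handling guarantee a chain of custody needs.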
7) Correlation
Correlation is the process of taking different events from the same or different sources and recognizing them as a single composite event. For example, some log management products can recognize a packet flood or password-guessing attack, rather than simply reporting multiple dropped packets or failed logons. Correlation reflects product intelligence: log management products that excel at correlation are known as Security Information and Event Managers (SIEMs).
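The password-guessing example can be illustrated with a toy correlation rule. The event shape (`type` and `source` keys) and the threshold are assumptions; real SIEM rules correlate across time windows and many event types:

```python
from collections import defaultdict

def correlate_failed_logons(events, threshold=5):
    """Toy correlation rule: collapse repeated failed logons from one
    source into a single 'password_guessing' event once a threshold
    is crossed, instead of reporting each failure separately."""
    counts = defaultdict(int)
    alerts = []
    for e in events:
        if e["type"] == "failed_logon":
            counts[e["source"]] += 1
            if counts[e["source"]] == threshold:
                alerts.append({"type": "password_guessing",
                               "source": e["source"]})
    return alerts
```

Five raw failures become one meaningful alert – that collapse from noise into signal is what correlation buys you.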
Note that, for centralized log management to work well, it’s very important that all incoming log information carry accurate timestamps. Make sure that all monitored clients have the correct time and time zone set; this helps with reporting, forensic analysis and legal proceedings.
8) Baselining
Baselining is the process of defining what is normal in a particular environment so that alerts fire only on aberrant patterns and events. For instance, every network environment sees multiple failed logons during the day. How many failed logons are normal? How many failed logons in a particular time period should be reported as suspicious? Some log management products will listen to incoming message traffic and help set alerts when levels exceed certain thresholds. If the product doesn’t do this, it has to be done manually.
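One common heuristic (an assumption here, not the only approach) sets the threshold at the mean of observed daily counts plus a few standard deviations:

```python
import statistics

def baseline_threshold(daily_counts, k=3.0):
    """Alert threshold at mean + k population standard deviations of
    historical daily counts -- one simple baselining heuristic."""
    return statistics.mean(daily_counts) + k * statistics.pstdev(daily_counts)

def is_aberrant(count, daily_counts, k=3.0):
    """Flag a count as suspicious only if it exceeds the baseline."""
    return count > baseline_threshold(daily_counts, k)
```

With a history of roughly 9–13 failed logons per day, a day with 40 stands out while a day with 12 stays quiet – which is exactly the separation baselining is meant to provide.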
9) Alerting
When a critical security or operational event happens, it’s important that a response team be notified. Most products support email and SNMP alerting; others support paging, SMS, network broadcast messages and syslog forwarding. A few products interface with popular help desk products so that a service ticket can be generated and routed automatically.
It’s also crucial that alerting thresholds suppress multiple, continuous alerts arising from a single causative event – something most products support. For example, you don’t want to be alerted 1,000 times about a single, continuing port scan across multiple ports; one alert should be enough to get the response team moving.
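A minimal sketch of that suppression, assuming alerts arrive sorted by a numeric `time` field (in seconds) and are keyed by source and type:

```python
def suppress(alerts, window=300):
    """Emit only the first alert per (source, type) within `window`
    seconds; later duplicates in the window are dropped.
    Assumes `alerts` is sorted by its 'time' field."""
    last_sent = {}
    out = []
    for a in alerts:
        key = (a["source"], a["type"])
        if key not in last_sent or a["time"] - last_sent[key] >= window:
            out.append(a)
            last_sent[key] = a["time"]
    return out
```

A port scan that fires at t=0, 10 and 20 seconds produces one alert; only if it is still running five minutes later does a reminder go out.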
10) Reporting
Reporting on all collected events enables long-term baselining and metrics. Critical events should be both alerted on and included in reports. Reporting allows technical teams to pinpoint problems and management to gauge compliance efforts.
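At its simplest, a report is a rollup of events over a period. A sketch of the counting behind a daily summary, assuming each event carries a `type` field:

```python
from collections import Counter

def summarize(events):
    """Count events by type -- the kind of rollup a daily report or
    long-term trend line is built from."""
    return Counter(e["type"] for e in events)
```

Run over each day’s events, these counts feed both the trend charts management wants and the baselines the alerting step depends on.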
When choosing a log management solution, evaluate the product’s features and capabilities with this whole process in mind. And now you have the know-how.