Splunk is a good tool for indexing and searching logs.

Splunk uses SPL (the Splunk Processing Language) for querying.

Splunk's common default ports are:

8000 - Splunk Web interface
8089 - splunkd management port (REST API)
9997 - indexer port for receiving data from forwarders
8191 - KV store
514 - network input for syslog data

To ignore certain errors and avoid being alerted on false positives:

index=prodapplications sourcetype="someservice" level=error message!="*Error processing" | table dateTime,LEVEL,logger,message,exception

Search the access logs, and return the number of hits from the top 100 values of "referer_domain".

sourcetype=access_combined | top limit=100 referer_domain | stats sum(count)

** Graph the average "thruput" of hosts over time **

... | timechart span=5m avg(thruput) by host

Splunk regular expressions are very powerful; both the regex and rex commands accept PCRE-style patterns.

**Keep only search results whose _raw field contains IP addresses in the non-routable class A range (10.0.0.0/8)**

... | regex _raw="(?&lt;!\d)10\.\d{1,3}\.\d{1,3}\.\d{1,3}(?!\d)"
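The same pattern can be sanity-checked in Python. Note the escaped dot after "10" so it matches a literal dot rather than any character; the sample events below are invented for illustration:

```python
import re

# Same PCRE pattern used by the Splunk regex command above.
# (?<!\d) / (?!\d) stop partial matches like "210.1.2.3".
NON_ROUTABLE_A = re.compile(r"(?<!\d)10\.\d{1,3}\.\d{1,3}\.\d{1,3}(?!\d)")

events = [
    "src=10.1.2.3 dst=8.8.8.8",        # contains a 10.0.0.0/8 address
    "src=192.168.0.1 dst=172.16.0.1",  # no class A private address
    "id=210.1.2.3",                    # must NOT match: "10." is preceded by a digit
]

# Keep only events where the pattern matches, like | regex _raw=...
kept = [e for e in events if NON_ROUTABLE_A.search(e)]
print(kept)  # -> ['src=10.1.2.3 dst=8.8.8.8']
```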

The rex command extracts new fields at search time using named capture groups:

| rex field=_raw "ID:(?&lt;ID&gt;[0-9]+)"
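The equivalent extraction in Python, which spells named groups as (?P&lt;name&gt;...); the sample log line is invented:

```python
import re

# Python equivalent of: | rex field=_raw "ID:(?<ID>[0-9]+)"
raw = "2024-01-15 12:00:01 request accepted ID:48213 status=OK"

m = re.search(r"ID:(?P<ID>[0-9]+)", raw)
print(m.group("ID"))  # -> 48213
```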


Return the first 20 results. 
... | head 20
Reverse the order of a result set
... | reverse
Sort results by "ip" in ascending order, then "url" in descending order.
... | sort ip, -url
Return the last 20 results (in reverse order)
... | tail 20

inputs.conf is located in $SPLUNK_HOME/etc/system/local/ (stock defaults live in $SPLUNK_HOME/etc/system/default/ and should not be edited directly).

Forwarders use it to define which logs on the client machine are collected and forwarded to the indexer for indexing.


sourcetype = access_common
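The lone sourcetype setting above would normally sit inside a monitor stanza. A minimal sketch, assuming an Apache access log; the monitored path and index name are examples, not from the original post:

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf
# Monitor an example log file and tag events with the access_common sourcetype.
[monitor:///var/log/httpd/access_log]
sourcetype = access_common
index = main
disabled = false
```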

** Web availability **

After installing the web availability Splunk module, the following searches can power a status dashboard:

index=_internal sourcetype=web_availability_modular_input INFO OR WARNING OR ERROR OR CRITICAL | rex field=_raw "(?&lt;severity&gt;(DEBUG)|(ERROR)|(WARNING)|(INFO)|(CRITICAL)) (?&lt;message&gt;.*)" | fillnull value="UNDEFINED" severity | timechart count(severity) as count by severity

<query>index=_internal sourcetype=web_availability_modular_input | rex field=_raw "(?&lt;severity&gt;(DEBUG)|(ERROR)|(WARNING)|(INFO)|(CRITICAL)) (?&lt;message&gt;.*)" | fillnull value="undefined" severity | stats sparkline count by severity | sort -count</query>

<query>index=_internal sourcetype=web_availability_modular_input $severity$ | rex field=_raw "(?<severity>(DEBUG)|(ERROR)|(WARNING)|(INFO)|(CRITICAL)) (?<message>.*)" | sort -_time | eval time=_time | convert ctime(time) | table time severity message</query>

<searchString>sourcetype="web_ping" $only_enabled$ | fillnull response_code value="Connection failed" | eval response_code=if(response_code="", "Connection failed", response_code) | eval response_code=if(timed_out == "True", "Connection timed out", response_code) | stats sparkline(avg(total_time)) as sparkline_response_time avg(total_time) as avg_response_time max(total_time) as max_response_time latest(response_code) as response_code latest(_time) as last_checked latest(title) as title latest(total_time) as response_time range(total_time) as range min(total_time) as min by url | eval response_time=round(response_time, 0)." ms" | eval average=round(avg_response_time, 0)." ms" | eval maximum=round(max_response_time, 0)." ms" | eval range=round(min, 0)." - ".round(min+range, 0)." ms" | table title url response_code last_checked response_time average range sparkline_response_time  | `timesince(last_checked,last_checked)` | sort -response_time</searchString>
Alert when the file-monitor counts fall outside their expected ranges, using an appended subsearch with relative_time:

index=wc_prod_filemon (source="powershell://wc_filemon_fileportalworking" createtime_diffmins>5 filename=*.tif) OR (source="powershell://wc_filemon_faxes") | append [search index=wc_prod_filemon source="powershell://wc_filemon_confirmation" | eval createtime=strftime(relative_time(strptime(createtime,"%Y-%m-%d %H:%M:%S"),"-11h"),"%Y-%m-%d %H:%M:%S") | eval createtime_diffmins=round(((_time-strptime(createtime,"%Y-%m-%d %H:%M:%S"))/60),0) | search createtime_diffmins>10] | stats count(eval(match(source,"powershell://wc_filemon_confirmation"))) as confirmation_count, count(eval(match(source,"powershell://wc_filemon_faxes"))) as fax_count, count(eval(match(source,"powershell://wc_filemon_fileportalworking"))) as fileportal_count | eval bool=if(fileportal_count>100 OR fax_count=0 OR confirmation_count>15,1,0) | rangemap field=bool low=0-0 severe=1-1 default=none
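The subsearch's time math (shift createtime back 11 hours, then take the event's age in whole minutes) can be sketched in Python; the timestamps here are invented stand-ins:

```python
from datetime import datetime, timedelta

# Mirrors the eval steps in the confirmation subsearch:
#   strptime the createtime string, shift it -11h (relative_time),
#   then compute the age in minutes relative to the event time (_time).
FMT = "%Y-%m-%d %H:%M:%S"

event_time = datetime(2024, 1, 15, 13, 0, 0)               # stand-in for _time
createtime = datetime.strptime("2024-01-15 23:45:00", FMT)  # stand-in field value

adjusted = createtime - timedelta(hours=11)                # 2024-01-15 12:45:00
diff_mins = round((event_time - adjusted).total_seconds() / 60)
print(diff_mins)  # -> 15, which would satisfy createtime_diffmins>10
```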

** Service status **

index=wc_prod_svc | rename servicename AS "Service Name", displayname AS "Display Name", status AS "Status" | table "Service Name","Status" | sort -"Status",+"Service Name"

** Eval with cutofftime **

(index=comhr_prod_filemon source=powershell://hr_filemon_scanbatch createtime_diffmins>5) OR (index=hr_prod_web source="web_ping://comhr_web_webtop" sourcetype="web_ping" response_code>=400) | append [search index=hr_prod_filemon source="powershell://hr_filemon_refdatainput" | eval cutofftime=(strftime(now(), "%Y-%m-%d")." 09:30:00") | eval result=if(_time>strptime(cutofftime,"%Y-%m-%d %H:%M:%S"),1,0) | search result=1] | stats count(eval(match(source,"powershell://hr_filemon_scanbatch"))) as scanbatch_count, count(eval(match(source,"powershell://hr_filemon_refdatainput"))) as refdatainput_count, count(eval(match(source,"web_ping://hr_web_webtop"))) as weberror_count | eval bool=if(scanbatch_count>100 OR refdatainput_count>0 OR weberror_count>0,1,0) | rangemap field=bool low=0-0 severe=1-1 default=none
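The cutofftime eval (flag events later than 09:30 on the current day) can be sanity-checked outside Splunk. A Python sketch of the same comparison; the sample times are invented, and the event's own date stands in for now() to keep the demo deterministic:

```python
from datetime import datetime

# Mirrors: eval cutofftime=(strftime(now(), "%Y-%m-%d")." 09:30:00")
#          eval result=if(_time>strptime(cutofftime,"%Y-%m-%d %H:%M:%S"),1,0)
FMT = "%Y-%m-%d %H:%M:%S"

def after_cutoff(event_time: datetime, cutoff_hms: str = "09:30:00") -> int:
    """Return 1 if the event occurred after today's cutoff time, else 0."""
    cutoff = datetime.strptime(event_time.strftime("%Y-%m-%d") + " " + cutoff_hms, FMT)
    return 1 if event_time > cutoff else 0

print(after_cutoff(datetime(2024, 1, 15, 10, 0, 0)))  # -> 1 (after 09:30)
print(after_cutoff(datetime(2024, 1, 15, 8, 0, 0)))   # -> 0 (before 09:30)
```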

These writings represent my own personal views alone.
Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.