Aytalina Azarova’s Post

Cyber Security Consultant, PhD, CPSA, Security+, Network+

I recently installed Splunk Enterprise in my home lab and have been playing with it over the last couple of weeks. Having spent the last decade analysing public health data, I couldn’t help noticing a divergence in how the two fields treat outliers and handle false positives.

In Splunk, outliers are often the “crown jewels” of insight: they can indicate critical threats or anomalies within a system, cybersecurity breaches, operational inefficiencies, or even emerging trends in customer behaviour. Splunk’s detection features are tuned to surface outliers precisely because of their significance in uncovering security risks. In scientific research, by contrast, we treat outliers with caution and skepticism and look for the reasons behind them: measurement or selection errors, participant variability, and so on. Once identified, outliers are typically removed from the analysis to protect its integrity and reliability. (A toy sketch of both workflows is below.)

Another notable difference lies in the treatment of false positives. In Splunk, where the stakes can be high in terms of security breaches or operational disruptions, minimizing false positives is paramount: algorithms and tuning mechanisms are employed to reduce false alarms so that alerts stay actionable and reliable. In clinical research, false positives are informative in their own right: together with the rate of false negatives, they reveal the ‘sharpness’ of a testing tool or procedure, captured by its sensitivity and specificity (see the second sketch below).

Let me know in the comments if you’ve noticed any other differences between security analytics and clinical research. #DataAnalytics #Splunk #ClinicalResearch #DataIntegrity #FalsePositives #Outliers
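To make the outlier contrast concrete, here is a minimal Python sketch. This is not Splunk’s actual detection logic, just a simple z-score rule on invented login counts; the threshold and data are mine. The only difference between the two workflows is what happens after the flag.

import numpy as np

def zscore_outliers(values, threshold=2.0):
    """Mark points more than `threshold` standard deviations from the mean.

    With small samples a large outlier inflates the standard deviation,
    so a threshold of 2 is used here instead of the textbook 3.
    """
    values = np.asarray(values, dtype=float)
    z = np.abs((values - values.mean()) / values.std())
    return z > threshold

# Hypothetical daily login counts, with one anomalous spike.
logins = np.array([52, 48, 51, 49, 50, 47, 53, 250])
mask = zscore_outliers(logins)

# Security-style workflow: the outlier IS the finding; surface it.
alerts = logins[mask]              # -> [250], investigate this day

# Research-style workflow: the outlier is suspect; drop it first.
clean = logins[~mask]
print(alerts, clean.mean())        # -> [250] 50.0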
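And a toy illustration of ‘sharpness’: sensitivity and specificity computed from a confusion matrix. All counts here are invented for the example.

def test_sharpness(tp, fp, tn, fn):
    """Sensitivity (true positive rate) and specificity (true negative
    rate) together describe how 'sharp' a test or detector is."""
    sensitivity = tp / (tp + fn)   # share of real positives we catch
    specificity = tn / (tn + fp)   # share of true negatives we clear
    return sensitivity, specificity

# Hypothetical detector: 90 true alerts caught, 10 missed,
# 900 benign events correctly ignored, 100 false alarms raised.
sens, spec = test_sharpness(tp=90, fp=100, tn=900, fn=10)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# -> sensitivity=0.90, specificity=0.90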

John Harbord

Writing Advisor at Maastricht University

1mo

Congrats on completing this course!
