Saturday, January 28, 2006

Squealing False Positives in SNORT

The vulnerability that has been reported against SNORT is quite serious. False positives have been a big issue in the intrusion detection arena since the early systems, but the issue addressed in this paper is specific to packet-based network monitoring systems, as the actual vulnerability is about manipulating network packets. The report says “The limitation is not the ability to accurately detect misuse behavior but rather the ability to suppress false alarms”. This is a major issue in signature-based intrusion detection systems. The reason is the pattern-matching approach taken by most of these tools, and SNORT does the same. When an attacker decides to generate malicious-looking packets at will, the system's capabilities are not sufficient to distinguish the real ones from the fake. How this could be overcome is still an open question. Because SNORT is a simple tool by design, implementing knowledge on how to discriminate real and fake packets would become an overhead in its implementation. However, this issue should be addressed by commercial IDSs that are also based on packet-based network monitoring, as customers will not be happy to get bogged down by false alarms after spending thousands of dollars on their NIDS.
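To see why an attacker can squeeze false alarms out of a signature matcher, here is a toy sketch (not SNORT's actual engine; the rules and payloads are made up). The matcher only sees payload bytes, so a harmless packet crafted to contain a signature triggers exactly the same alert as real attack traffic:

```python
# Toy signature-based detection: substring matching over payload bytes.
SIGNATURES = {
    "rule-1": b"/bin/sh",         # hypothetical shellcode fragment
    "rule-2": b"GET /etc/passwd", # hypothetical web-attack pattern
}

def match(payload: bytes) -> list:
    """Return the ids of every signature found in the payload."""
    return [rule for rule, pattern in SIGNATURES.items() if pattern in payload]

real_attack = b"\x90\x90/bin/sh\x00"            # actual exploit traffic
fake_attack = b"just discussing /bin/sh here"   # attacker-generated noise

print(match(real_attack))   # ['rule-1']
print(match(fake_attack))   # ['rule-1'] -- indistinguishable false positive
```

Both packets raise the same alert, which is exactly the suppression problem the report describes.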

Friday, January 27, 2006

Gates vs Jobs

This article from wired.com critiques Steve Jobs (Apple CEO) for not being inclined towards charity and not spending enough on it, whereas Bill Gates (MS founder) is known for a lot of that. However, in my view, isn't that something that is personal to Jobs or Gates? They worked so hard to get where they are, and they were smart enough to do so. The millions they earned were rewards for their hard work, and spending it on charity is totally up to them. Having said that, I am not saying that you should not make charity donations. I think not all rich people make public charity donations and turn that into propaganda for their image the way Bill Gates does. This is not necessary. If you are giving charity, then just give it. Why make it a big deal? There is a nice proverb in my mother tongue that goes "The right hand should not be aware of what your left gives". For me, this makes total sense. What do you folks think?

Friday, January 20, 2006

IBM Extreme Blue pics

Here are some photos from my IBM Extreme Blue experience. It was a great experience!! During the internship we went white-water rafting as an extreme team-building exercise. Most of the other pictures are from the IBM Extreme Blue Expo that was held at their headquarters in Armonk, New York.

Here is an excerpt from the article that appeared after the Expo in the Ottawa Business Journal.

News Story
Students return triumphant from extreme test
By Ottawa Business Journal Staff
Wed, Aug 24, 2005 2:00 PM EST




Dominique Simoneau-Ritchie, Vaheesan Selvarajah, Bassem Bohsali and Dmitry Karasik. (Darren Brown, OBJ)

They went, they presented, and, by all reports, they conquered. Four university students, two of whom are Ottawa natives, returned from IBM headquarters in New York City last week armed with the knowledge that they can run with the big dogs after presenting an original idea in front of 50 of the company's top brass.

Dmitry Karasik and Vaheesan Selvarajah, both Carleton University students, and Bassem Bohsali and Dominique Simoneau-Ritchie of Montreal formed one of 25 teams from across North America charged with taking a concept given to them by IBM and producing a marketable product in only 14 weeks. The challenge is part of IBM's Extreme Blue internship program. more

Wednesday, January 18, 2006

Human immune system and computer security

Today's discussion was on an exciting topic: how an analogy could be drawn from the human immune system to computer systems in the interest of identifying security violations. I was so thrilled to learn the nitty-gritty details, which go way beyond what I had thought. Actually, as soon as I heard this I thought I should take up a biology course. It was so insightful: proteins, amino acids, peptides, T-cells, etc. It is fascinating to see how the human immune system protects the body by constantly checking every cell. This drove the direction of my professor's research: the idea of defining a "self" for a computer system, and the novel finding was the use of sequences of system calls. It was like the peptides being checked by the immune system with MHC (now I may be wrong in some cases, so better read the paper on this), but trust me, it was so interesting. Here is the link to the paper: Forrest, Hofmeyr, & Somayaji (1997), Computer Immunology

Here is what I thought about the paper:
The approach to intrusion detection in this paper is very compelling. The detection model built by defining the "self" of a computer is still debatable, but the choice of using sequences of system calls seems like a good one, because that is what any executing program is all about. However, by not considering the parameters to the system calls, it may still fail to detect buffer overflow attacks at an early stage. The problem of autoimmune reactions can also be a problem for an IDS, and that's when we get false positives. But the suggestion of a multilayered approach may solve the issue, just as it is solved in the human immune system. An important point discussed in this paper is that for an IDS to be effective it has to be distributed and parallel. The idea of an IDS being parallel is something newly introduced by this analogy to the human immune system. The fact that "one size does not fit all" is also identified through this analogy. The recommendation for a customized version of the IDS is quite different from previous models, which took a generalized approach. Because the model uses simple attributes that are compact and universal to any computer system, this research has the potential to achieve a nearly real-time response to intrusion detection. However, the two-phase design, requiring a learning period for the system prior to deployment in production, may impose some limitations, because the question could be "when is it good to stop learning?". The second approach taken in this paper, distributed change detection, has a lot of complexity to it. The questions raised about the design of these techniques define the core of this approach and have a big impact on its success, as they are the ones that will detect novel abnormal activities.
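The sequence-of-system-calls idea can be sketched in a few lines. This is only a minimal illustration of the "self" database of short call windows, with a made-up window size and made-up traces (the paper's own parameters and data differ):

```python
from collections import deque

def ngrams(trace, n=3):
    """Yield every sliding window of n consecutive system calls."""
    window = deque(maxlen=n)
    for call in trace:
        window.append(call)
        if len(window) == n:
            yield tuple(window)

# "Learning" phase: record the normal call windows seen for a program.
normal_trace = ["open", "read", "mmap", "read", "close", "open", "read"]
self_db = set(ngrams(normal_trace))

def anomalies(trace, n=3):
    """Count windows in a new trace that never occurred in normal runs."""
    return sum(1 for g in ngrams(trace, n) if g not in self_db)

# A trace with an unusual call ordering is flagged as "non-self".
suspect_trace = ["open", "read", "execve", "socket", "read", "close"]
print(anomalies(suspect_trace))   # 4 windows never seen during learning
print(anomalies(normal_trace))    # 0 -- the normal trace matches itself
```

Note how the sketch also exposes the learning-period problem I mentioned: any normal behavior not seen before the `self_db` is frozen will later count as an anomaly.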

Wednesday, January 11, 2006

How far have we progressed in IDS ?

I am taking a course on intrusion detection this term, and we are discussing some popular papers on this topic as it's a seminar course. So I thought I'd share some of the discussion with the world as well...

Intrusion detection is a very young research area. Early traces of research point to interesting work done by Anderson (1980), Computer Security Threat Monitoring and Surveillance, and Denning (1986), An Intrusion Detection Model.

It's important for us to note that even though the technology was old and the proposed models are not so suitable for the present-day situation, the questions that they tried to address still remain open.

These two papers discuss pretty much the same security requirements for an IDS. Primarily they are concerned about both external and internal attacks. However, they both rely on audit trails or audit records for any intrusion detection. The models discussed in the papers rely on the assumption that any abnormal behavior in system access or resource usage will manifest in the audit data, and that the source of the threat can also be identified, even though this is not always true. Both authors were aware of possible attacks from internal and external users of the system, and believed that running through the audit trails looking for abnormal system behavior or usage patterns would reveal any attempt at intrusion. When it comes to analyzing the audit records, both papers discuss models based on statistical metrics and measures. The approach taken to understand the audit records is to characterize computer usage by means of user ids, file accesses, and program executions.
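The statistical flavor of these models can be illustrated with a rough sketch: keep a per-user profile (mean and standard deviation of some audit measure, say files accessed per session) and flag observations that fall far outside it. The numbers, the measure, and the threshold here are all invented for illustration, not taken from either paper:

```python
import statistics

def build_profile(observations):
    """Summarize a user's audit history as (mean, standard deviation)."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_abnormal(value, profile, k=3):
    """Flag a value more than k standard deviations from the user's mean."""
    mean, stdev = profile
    return abs(value - mean) > k * stdev

# Audit history: files accessed per session by one user.
history = [10, 12, 9, 11, 10, 13, 12, 10]
profile = build_profile(history)

print(is_abnormal(11, profile))   # typical session -> False
print(is_abnormal(250, profile))  # mass file access -> True
```

Simple as it is, this shows both the appeal (cheap to compute from audit records) and the weakness (a profile must exist and stay current for every user) that the rest of this post discusses.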

The computing infrastructure considered in these intrusion detection models is primarily batch and time-sharing systems. The models discussed suit centralized systems with a limited number of users and resources; the scalability of those systems was not considered in this IDS architecture. Present-day systems are typically distributed in nature and have great potential to grow rapidly in terms of users and resources. The growth in PC technology has created a need for intrusion detection systems that are simple and deployable on almost all present-day PCs. The goals discussed in these two papers do not really cater to present-day needs. However, the approach taken towards intrusion detection can very well be applied to present requirements with little modification. When we look at current server-based technologies and the proliferation of Internet-based software systems, the assumptions made in these two papers become invalid. The open and ever-growing Internet architecture imposes increasing requirements on any system in terms of security and intrusion detection. Even though the technology has improved a lot and techniques for intrusion detection have improved significantly, profile-based or signature-based systems are less efficient because they become obsolete soon. Building profile templates to monitor new users or resources is no longer trivial with an Internet-based user base. Further, defining rules for anomaly detection is even harder as systems become more heterogeneous and their interoperability becomes mandatory. Feature interactions among different types of systems are hard to trace when they differ in platform and implementation.


The models that were proposed are good generalizations to some extent, and they tried to answer "Could this be a solution to IDS?". However, the lesson to take from these papers is that questions like these...

* Soundness of Approach -- Does the approach actually detect intrusions? Is it possible to distinguish anomalies related to intrusions from those related to other factors?

* Completeness of Approach -- Does the approach detect most, if not all, intrusions, or is a significant proportion of intrusions undetectable by this method?

* Timeliness of Approach -- Can we detect most intrusions before significant damage is done?

* Choice of Metrics, Statistical Models, and Profiles -- What metrics, models, and profiles provide the best discriminating power? Which are cost-effective? What are the relationships between certain types of anomalies and different methods of intrusion?

* System Design -- How should a system based on the model be designed and implemented?

* Feedback -- What effect should detection of an intrusion have on the target system? Should IDES automatically direct the system to take certain actions?

* Social Implications -- How will an intrusion detection system affect the user community it monitors? Will it deter intrusions? Will the users feel their data are better protected? Will it be regarded as a step towards ‘big brother’? Will its capabilities be misused to that end?


remain open for present-day IDS. So, since 1980, how far have we progressed in terms of intrusion detection?

Thursday, January 05, 2006

After a long time..

I am back again after a long silence... I was bogged down by some busy school work during the past months and was not able to keep my blog up to date. However, I thought I should try my best to do it often... so there may be a little silence once in a while, but I will come back :) Lots of interesting things have happened during these days, and I will try to put them up one by one...