# Eigenvectors and eigenvalues

A point x in a two-dimensional space represents a vector because it has a magnitude and a direction with respect to the origin (0, 0). A scalar multiple of x is another vector that lies on the same line as x (stretched or scaled down). When we multiply the vector x by a matrix A, the result is again a vector, but this resultant vector either points along the same line as x or in a new direction, and it gets scaled up or down. If the resultant vector lies along the same line as x, we say x is an eigenvector of matrix A; otherwise, it is not. A 96-second YouTube video explains the same concept visually.

Corresponding to an eigenvector, we also get a scalar value ($\lambda$) which, on multiplying the vector x, results in the same vector as that obtained by the matrix multiplication above. Mathematically,

$Ax = \lambda x$

Here, $x$ is the eigenvector and $\lambda$ is the corresponding eigenvalue.
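As a quick check in R, we can verify the relation with base R's eigen() (the matrix A below is just an arbitrary example, not from any dataset):

```r
# Sketch: verifying A x = lambda x with base R's eigen()
A <- matrix(c(2, 1, 1, 2), nrow = 2)  # an arbitrary symmetric 2x2 matrix
e <- eigen(A)
lambda <- e$values[1]                 # largest eigenvalue (here 3)
x <- e$vectors[, 1]                   # its eigenvector
# multiplying by A only scales x; the result stays on the same line
all.equal(as.vector(A %*% x), lambda * x)  # TRUE
```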

References:

1. http://blog.stata.com/2011/03/09/understanding-matrices-intuitively-part-2/

# Installing and configuring RaZberry

Follow these steps to install RaZberry:

1. Format the memory card using SDFormatter
2. Download the OS for Raspberry Pi and burn the OS image onto the memory card using Etcher
3. Enable the display as mentioned in this post.
4. Connect your Raspberry Pi to a monitor and enable the following in the Raspberry Pi configuration: VNC, SSH
5. Install screen: this helps keep scripts running in the background even when the SSH session disconnects
6. Install Z-Wave using the instructions on the Z-Wave website

# Enable display in Raspberry Pi

Do you want to connect your Raspberry Pi to an external display via HDMI? If yes, follow these simple steps:

1. Install the operating system on the memory card
2. Open config.txt on the memory card
3. Search for the following lines and uncomment them:
   - hdmi_safe=1
   - hdmi_force_hotplug=1
4. Save the file and insert the memory card into the Raspberry Pi module
5. Insert one end of the HDMI cable into the Raspberry Pi and the other end into the monitor.
6. You should now get the Raspberry Pi GUI on the monitor.

# A peek into IIT Bombay

I was fortunate to visit IIT Bombay for three months, from October to December 2016. I became part of a well-established research lab, SEIL, which works on energy sustainability. The team members are dynamic and diverse, consisting of PhDs, MTechs, research staff and one driver (the Lab Head). Also, we had a CZAR, but a caring one. Some members have expertise on the embedded side, some are great at data churning, and a few are all-rounders. Overall, the team is a mix of nerds, enthusiasts and, of course, procrastinators!

The first two weeks were terrible for me; I felt like an alien on campus. There were various reasons for this feeling: i) everyone was engrossed in their red velvet cubicles, either tweaking smart meters, playing with sensors, soldering boards or doing some incomprehensible stuff that only they could decode; ii) administrative issues; iii) Mumbai residential flat rules. But all thanks to the weekly “group meetings” and “Smart ICT” classes, which broke the frozen state and made me feel like I was at my home institute. The best part of the class was that almost every week an eminent figure would come, deliver his best and keep us engrossed for an hour and a half. I rate this course as the best course I have ever attended; the instructor delivered lectures in the form of stories and left us awestruck. The remaining ten weeks were enjoyable, and work went smoothly.

Well, I stayed outside the campus, but I survived the first month on the food of Phoenix (H10), the second month on Woodland (H8), and the last month on the Queen of the campus, the enlightened abode (H1).

Some NP questions I was never able to solve include:

1. How KReSIT got the best building architecture award! One of my visitors even commented that the whole of IIIT-D could fit inside KReSIT!
2. Why I felt embarrassed at IITB’s technical fest and felt bad at Mood Indigo, Asia’s biggest cultural fest.
3. Where my lab mates were during Mood Indigo!
4. If Saraswat Bank has branches in America, then why did a Canara Bank ATM fail to dispense money to a 90-minute clowner during demonetisation!

The things which I am going to miss include:

1. The playground: the maintainers keep it ready for every type of game, and you feel fully occupied with the 7 – 9 long, symmetrical, coloured running tracks. These tracks are used by both fast runners and crawling thinkers. You will also find a life outside hectic schedules, and people doing exercises you will not find even on YouTube.
2. The Central Library: a well-lit whitish building, both inside and outside. A calm place where you find students either in deep study or watching videos to shake off frustration.
3. The Queen of the campus: a place where you relax and watch 20 minutes of TV after regular meals.

Finally, the three-month stay finished and it was my last working day on campus. On this day, I had ten minutes of wisdom sermon from my guide; then all the lab members assembled and had a homemade cake (thanks for the delicious cake). And to my surprise, I was requested to give a speech covering the above details, which I did. Oh, I forgot to thank the eduroam facility, without which I might have suffered a lot on campus.

Our IIT Bombay team at one of the lab lunches

Concluding with the statement of my guide, “Once a SEILER, always a SEILER”.

# R scripts to handle dirty data

Filling missing values with NA: I assume that org_xts in the following code represents a given time-series (xts) object in which we need to handle missing readings.

```r
library(xts)
# org_xts represents the xts object with missing readings
# assuming the original object is hourly sampled
timerange = seq(start(org_xts), end(org_xts), by = "hour")
temp = xts(rep(NA, length(timerange)), timerange)
# merging introduces NA rows for the missing time-stamps; keep the original column
complete_xts = merge(org_xts, temp)[, 1]
```
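For intuition, the same gap-filling idea can be sketched in base R without xts, by matching observed time-stamps against a complete hourly sequence (the toy readings below are made up for illustration):

```r
# toy readings with a missing hour at 01:00
obs_time <- as.POSIXct(c("2017-01-01 00:00", "2017-01-01 02:00"), tz = "UTC")
obs_val  <- c(1.5, 2.5)
# complete hourly grid spanning the observations
full_grid <- seq(min(obs_time), max(obs_time), by = "hour")
# NA appears wherever no reading exists
filled <- obs_val[match(full_grid, obs_time)]
filled  # 1.5 NA 2.5
```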

Removing duplicate values: here, we identify duplicate entries on the basis of duplicate time-stamps.

```r
library(xts)
# dummy time-series data
# assuming the original object is hourly sampled
timerange = seq(start(org_xts), end(org_xts), by = "hour")
temp = xts(rep(NA, length(timerange)), timerange)
# identify indexes of duplicate entries
duplicate_entries = which(duplicated(index(temp)))
# data without duplicates (guard against the case of no duplicates)
if (length(duplicate_entries) > 0) {
  new_temp = temp[-duplicate_entries, ]
} else {
  new_temp = temp
}
```
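A toy base-R illustration of how duplicated() flags repeated time-stamps (the data is made up for the example):

```r
# one hour (01:00) recorded twice; keep the first reading per stamp
stamps <- as.POSIXct("2017-01-01", tz = "UTC") + c(0, 3600, 3600, 7200)
vals   <- c(10, 20, 21, 30)
dedup  <- vals[!duplicated(stamps)]
dedup  # 10 20 30
```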

Resampling higher-frequency data to a lower frequency: this function resamples high-frequency data to a lower frequency. Note that there are some tweaks according to the timezone, currently set to “Asia/Kolkata”.

```r
library(xts)
resample_data <- function(xts_datap, xminutes) {
  # xts_datap: input xts time-series data; xminutes: required lower-frequency rate
  # subtracting half an hour to align hours (IST is UTC+5:30)
  ds_data = period.apply(xts_datap,
                         INDEX = endpoints(index(xts_datap) - 3600 * 0.5,
                                           on = "minutes", k = xminutes),
                         FUN = mean)
  # align data to the nearest x-minute boundary
  align_data = align.time(ds_data, xminutes * 60 - 3600 * 0.5)
  return(align_data)
}
```
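The downsampling step itself can be sketched in base R, without xts or the timezone tweak, by bucketing time-stamps with cut() and averaging each bucket (toy half-hourly data of my own):

```r
# half-hourly readings averaged down to hourly values
stamps <- as.POSIXct("2017-01-01 00:00", tz = "Asia/Kolkata") +
  seq(0, 5400, by = 1800)             # 00:00, 00:30, 01:00, 01:30
vals   <- c(1, 3, 5, 7)
hourly <- tapply(vals, cut(stamps, breaks = "hour"), mean)
as.vector(hourly)  # 2 6
```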

# Illustration of k value effect on outlier score

Continuing with the previous post, here I will illustrate how outlier scores vary with different k values. The context of the figure below is explained in my previous post.

After running the LOF algorithm with the following R code:


```r
library(Rlof)       # for applying the local outlier factor
library(HighDimOut) # for normalisation of LOF scores
set.seed(200)
df <- data.frame(x = c(5, rnorm(2, 20, 1), rnorm(3, 30, 1), rnorm(5, 40, 1),
                       rnorm(9, 10, 1), rnorm(10, 37, 1)))
df$y <- c(38, rnorm(2, 30, 1), rnorm(3, 10, 1), rnorm(5, 40, 1),
          rnorm(9, 20, 1), rnorm(10, 25, 1))
# pdf("understandK.pdf", width = 6, height = 6)
plot(df$x, df$y, type = "p", ylim = c(min(df$y), max(df$y) + 5),
     xlab = "x", ylab = "y")
text(df$x, df$y, pos = 3, labels = 1:nrow(df), cex = 0.7)
# dev.off()
lofResults <- lof(df, c(2:10), cores = 2)
apply(lofResults, 2, function(x) Func.trans(x, method = "FBOD"))
```



We get the outlier scores for the 30 days over the range k = 2:10 as follows:

Before explaining the results further, I present the distance matrix below, where each entry shows the distance between days X and Y. Here, X represents the row entry and Y the column entry.

Let us understand how outlier scores get assigned to day 1 for different k’s in the range 2:10. The neighbours of point 1 in order of increasing distance are:

Here the first row shows the neighbour and the second row the distance between point 1 and that neighbour. Looking at the outlier values of point 1, we find that up to k = 8 the outlier score of point 1 is very high (near 1). The reason is that the density of the k neighbours of point 1 up to k = 8 is high compared to that of point 1, which results in a higher outlier score for point 1. But when we set k = 9, the outlier score of point 1 drops to 0. Let us dig deeper. The 8th and 9th neighbours of point 1 are points 18 and 17 respectively. The neighbours of point 18 in increasing distance are:

and the neighbours of point 17 are:

Observe carefully that the 8th neighbour of point 1 is point 18, and the 8th neighbour of point 18 is point 19. Checking the neighbours of point 18, we find that all of its 8 neighbours are nearby (in cluster D). Hence all the k neighbours of point 1, up to the 8th, are far denser than point 1, and point 1, with its lower density, gets a high anomaly score. On the other hand, the 9th neighbour of point 1 is point 17, whose 9th neighbour is point 3. Checking further, we find that all the points in cluster D now find their 9th neighbour in either cluster A or cluster B. This decreases the density of all the considered neighbours of point 1. As a result, point 1 and its 9 neighbours have densities in a similar range, and point 1 gets a low outlier score.

I believe this small example explains how outlier scores vary with different k’s. Interested readers can use the provided R code to explore this example further.

# Intuitive meaning of k range in Local Outlier Factor (LOF)

The Local Outlier Factor (LOF) is a well-known outlier detection algorithm. In the previous post, I noted down the steps of LOF, and here I will discuss its k parameter. The k parameter often lands LOF users in difficulty, but by looking at the meaning of the k parameter and the respective application domain, I find it easy to select a k range. The authors of LOF suggest using a range of k values instead of a single value, because no particular value of k generalises across datasets with diverse underlying distributions. Now, let us understand how to select the lower (lwrval) and upper (uprval) values of the k range.

To explain further, let us consider the simple scenario shown in the figure below.

This figure shows the energy consumption of an imaginary home for one month (30 days). Each small circle represents the energy consumption of a particular day, and the number above the circle shows the corresponding day of the month. Nearby circles marked within red clusters (A, B, C, D, E) represent days that follow a similar pattern of energy consumption compared to the remaining days.

To use LOF on such a dataset, we need to set the range of k values instead of a single k value. Note that lwrval and uprval are domain dependent. According to LOF paper, lwrval and uprval are defined as:

• lwrval: This refers to the minimal cluster size consisting of similarly behaving points, where we believe the similarity is not due to some random cause. In other words, we assume a cluster smaller than lwrval represents outliers. For example, if I take lwrval = 3, then clusters A and B represent outliers because none of the points within these clusters has three more similar points/neighbours. At the same time, points within clusters C, D, and E represent normal points because each of them has three more similar neighbours.
• uprval: This refers to the upper optimal number of points expected to be similar. In other words, we believe that uprval points must be similar in the considered application domain. For example, in the energy domain, I know that for at least 6 days (the working days of a week) energy consumption is similar due to occupancy behaviour, so I set uprval = 6. No doubt there can be a cluster larger than uprval, but our reasoning on a specific dataset motivates some optimal uprval. Consider another example where we assume the occupants of a home change on a weekly basis – say there were 5, 10, 15, and 20 occupants in the first, second, third and fourth weeks of a month respectively. Consequently, the energy consumption should be similar intra-week and different inter-week. This suggests that we should get four clusters corresponding to the four weeks, each of size 7 (the number of days in a week). So uprval is 7 in this example.

I believe lwrval and uprval limits can now be easily interpreted for any application domain. Therefore, following the original LOF paper, we can calculate LOF outlier values on a set of k values defined by lwrval and uprval. In the next post, I will explain the above figure further and show how a particular k value affects the outlier score.
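To make the density reasoning behind LOF concrete, here is a minimal base-R sketch of the computation (simplified to exactly k neighbours with no tie handling; lof_sketch and the toy data are my own illustration — for real use, the Rlof package from the earlier post is the proper tool):

```r
# Minimal LOF sketch: k-nearest neighbours, reachability, local density, score
# (assumes k >= 2 and no duplicate points)
lof_sketch <- function(X, k) {
  d <- as.matrix(dist(X))            # pairwise Euclidean distances
  n <- nrow(d)
  # indices of the k nearest neighbours of each point (excluding itself)
  nn <- t(apply(d, 1, function(r) order(r)[2:(k + 1)]))
  # k-distance: distance to the k-th nearest neighbour
  kdist <- sapply(1:n, function(i) d[i, nn[i, k]])
  # local reachability density: inverse of the mean reachability distance
  lrd <- sapply(1:n, function(i)
    1 / mean(pmax(kdist[nn[i, ]], d[i, nn[i, ]])))
  # LOF: average ratio of the neighbours' density to the point's own density
  sapply(1:n, function(i) mean(lrd[nn[i, ]]) / lrd[i])
}

# four points in a tight cluster plus one far-away point
X <- matrix(c(0, 0, 0, 1, 1, 0, 1, 1, 10, 10), ncol = 2, byrow = TRUE)
scores <- lof_sketch(X, k = 2)
which.max(scores)  # the far-away point (row 5) has the highest score
```

A score near 1 means a point is about as dense as its neighbours, while scores well above 1 flag outliers, matching the behaviour discussed above.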