tag:blogger.com,1999:blog-30907953245298531762024-03-07T20:28:48.553-08:00Let them CKapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.comBlogger10125tag:blogger.com,1999:blog-3090795324529853176.post-8781017352491707912014-03-10T12:52:00.004-07:002014-03-10T12:52:40.816-07:00L1 vs L2 RegularizationMy understanding of the difference between L1 and L2 regularization, and what to use when.<br />
<div>
<br /></div>
<div>
This is more of a placeholder for relevant articles; I will add explanations later. </div>
<div>
<br /></div>
<div>
<br />
<ol>
<li>http://www.quora.com/Machine-Learning/What-is-the-difference-between-L1-and-L2-regularization</li>
<li>http://www.quora.com/Whats-an-intuitive-way-to-understand-the-difference-between-shrinkage-and-regularization-in-machine-learning-models</li>
<li>http://www.quora.com/Whats-a-good-way-to-provide-intuition-as-to-why-the-lasso-L1-regularization-results-in-sparse-weight-vectors</li>
<li>http://metaoptimize.com/qa/questions/3096/how-to-choose-a-supervised-learning-method#3100</li>
<li>http://www.quora.com/Why-is-L1-regularization-supposed-to-lead-to-sparsity-than-L2</li>
<li>http://www.chioka.in/differences-between-l1-and-l2-as-loss-function-and-regularization/</li>
<li>http://en.wikipedia.org/wiki/Least_absolute_deviations</li>
</ol>
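Until I add proper explanations, here is a minimal one-dimensional sketch (plain Python, made-up numbers) of the intuition the links above develop: for a single weight w fit to a target x, the L2 penalty only shrinks w toward zero, while the L1 penalty soft-thresholds it, snapping small weights to exactly zero. This is why L1 yields sparse weight vectors.

```python
# 1-D intuition: minimize (w - x)^2 / 2 + penalty(w)
# L2 penalty (lam/2) * w^2 -> closed form w = x / (1 + lam): shrinks, never exactly 0
# L1 penalty lam * |w|     -> closed form is soft-thresholding: small w snaps to 0

def l2_solution(x, lam):
    # Minimizer of (w - x)^2 / 2 + (lam / 2) * w^2
    return x / (1.0 + lam)

def l1_solution(x, lam):
    # Minimizer of (w - x)^2 / 2 + lam * |x|: the soft-thresholding operator
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

if __name__ == "__main__":
    lam = 1.0
    for x in [3.0, 0.5, -2.0]:
        print(x, l2_solution(x, lam), l1_solution(x, lam))
```

With lam = 1.0, a small signal x = 0.5 is shrunk to 0.25 by L2 but set to exactly 0.0 by L1; the same mechanism, coordinate by coordinate, is what zeroes out weights in lasso regression.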
</div>
Kapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com0tag:blogger.com,1999:blog-3090795324529853176.post-3974387138151143072012-01-24T23:32:00.000-08:002012-01-24T23:46:58.606-08:00Power of HIVE (HQL, SQL ) vs procedural languageToday my manager asked me to perform a simple task. The task was fairly easy if you think of it in terms of your favorite language like Java or Python. My manager asked me to perform the same task in HIVE. <br />
<br />
Consider the following columns in a HIVE table <br />
<code><br />
(key1 string, <br />
key2 string ,<br />
position int, <br />
Value double)<br />
</code><br />
<br />
Here the values of the position column range from 0 to 60. <br />
<br />
Now, the task was to find, for a given key1, key2 combination, all those rows for which the value at pos1 is less than the value at pos2, the value at pos2 is less than the value at pos3, and so on. Note that the positions should differ by 1, i.e. pos2 - pos1 = 1. <br />
<br />
In our product, key1 is the source of the query (say mobile or web), key2 is the category code of the query (say 800 for restaurants, 9001 for pizza etc.), and value is the CTR for that source and category code combination at a given position. <br />
<br />
So, to explain it again, the task was to find those category code combinations for which the CTR value at position 2 is greater than at position 1, at position 3 greater than at position 2, and so on.<br />
<br />
In Java this was easy: get all the rows and order them by key1, key2, and position. This gives you the values for a given key combination in increasing order of position. Then write a simple algorithm to find those codes for which the value at position 2 is greater than at position 1 for a given key1, key2 combination (leaving that as an exercise for the reader). TADaaa! This was easy. <br />
<br />
Now the major problem was to implement the same logic in HIVE. I will present the final query here. It's fairly self-explanatory. <br />
<br />
<code><br />
select<br />
distinct<br />
f.key1,<br />
f.key2,<br />
f.position<br />
from<br />
(select key1,key2,position,value from table_a) f<br />
join<br />
(select key1,key2,(position-1) as position,value from table_a) a<br />
on<br />
(f.key1=a.key1 and f.key2=a.key2 and f.position=a.position)<br />
where a.value > f.value<br />
</code><br />
<br />
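The same shift-and-join idea can be sanity-checked outside HIVE. Below is a small Python simulation (the rows are made up, purely for illustration): table "a" is the copy with position shifted down by one, and the join keeps the (key1, key2, position) triples whose value increases at the next position.

```python
# Hypothetical rows: (key1, key2, position, value)
rows = [
    ("web", "800", 1, 0.10),
    ("web", "800", 2, 0.15),  # value rose from position 1 to 2
    ("web", "800", 3, 0.12),  # value fell from position 2 to 3
]

# Table "a": position shifted down by one, as in the second sub-select
shifted = {(k1, k2, pos - 1): val for (k1, k2, pos, val) in rows}

# Join on (key1, key2, position); keep rows where the next position's value is higher
result = sorted(
    {(k1, k2, pos)
     for (k1, k2, pos, val) in rows
     if (k1, k2, pos) in shifted and shifted[(k1, k2, pos)] > val}
)
print(result)  # [('web', '800', 1)]
```

Only position 1 survives here, because its value (0.10) is exceeded at position 2 (0.15), while position 2's value is not exceeded at position 3.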
In the above query the table joins with itself on the same key combination. The trick used to compare the value at position 1 with the value at position 2, position 2 with position 3, and so on, is to subtract 1 from the position in every row of the second table and join it with the first table. Each joined row then pairs a position with the next one, so you can compare the two values and print the results.Kapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com0tag:blogger.com,1999:blog-3090795324529853176.post-45717696263313078232011-07-05T16:35:00.000-07:002011-07-05T16:37:57.416-07:00Configurations of running Hadoop locally.<div dir="ltr" style="text-align: left;" trbidi="on">This is a follow-up to my earlier post, "<a href="http://let-them-c.blogspot.com/2011/07/running-hadoop-locally-on-eclipse.html">How to debug Hadoop locally using eclipse</a>".<br />
<br />
In this post I will spell out the configuration needed to run the different modes of Hadoop locally. I will only cover the local and pseudo-distributed modes. The cluster mode is quite advanced and may be more suited for admins (or maybe I'm not motivated enough to learn about it right now).<br />
<br />
As I mentioned in my previous post, there are three modes of running Hadoop.<br />
a) Local mode<br />
b) Pseudo distributed mode<br />
c) Cluster.<br />
<br />
Two of them, Local and Pseudo-distributed, correspond to running Hadoop locally.<br />
<br />
Only Local mode is suitable for debugging all your mappers and reducers locally. The reason is that every mapper and reducer runs in a single JVM, giving Eclipse a process it can attach to and debug. This is difficult to do in Pseudo mode.<br />
<br />
The following are the config changes you need to perform for each of the modes.<br />
<br />
In case you are interested in the debugging mode too, you should add the following line to your $HADOOP_HOME/conf/hadoop-env.sh file.<br />
<blockquote><b><i><span class="s1">export</span></i><span class="s2"> HADOOP_OPTS=</span>"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5002"</b></blockquote><div class="p1">This will put Hadoop into debugging mode, listening for a debugger connection at host localhost and port 5002. </div><div class="p1"><span class="Apple-style-span" style="color: black; font-family: Times; font-size: small;"><br />
</span></div><div class="p1"><span class="Apple-style-span" style="color: black; font-family: Times; font-size: small;">Now, changes required to run in various mode:</span></div><div class="p1"><span class="Apple-style-span" style="color: black; font-family: Times; font-size: small;"><br />
</span></div><div class="p1"><span class="Apple-style-span" style="font-size: large;">a) Pseudo-mode:</span>Change the following properties of the 3 files.</div><div class="p1"></div><ol style="text-align: left;"><li><b><i>$HADOOP_HOME/conf/core-site.xml: </i></b></li>
</ol><div><blockquote class="" style="text-align: left;"><property><br />
<blockquote class=""><name>fs.default.name</name><br />
<value>hdfs://localhost:54310</value><br />
<description>The name of the default file system. A URI whose<br />
scheme and authority determine the FileSystem implementation.</description></blockquote></blockquote><div style="text-align: left;"> </property></div><blockquote> This tells Hadoop how to access files. Here it is using HDFS, the file system under the hood of Hadoop. This can be changed to FTP or other implementations of the Hadoop file system; HDFS is one of them.</blockquote><b><i> 2. $HADOOP_HOME/conf/hdfs-site.xml: </i></b><br />
<br />
<div><blockquote><property><br />
<name>dfs.replication</name><br />
<value>1</value><br />
</property></blockquote><blockquote>This tells Hadoop the number of times it will replicate files in HDFS. For a pseudo-distributed setup the logical value is 1. You can specify any other value here, say 2 or 5, but when the Hadoop daemons run they will warn that only 1 is a valid value in this mode. It is smart :) </blockquote></div><div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"> <b><i> 3. $HADOOP_HOME/conf/mapred-site.xml: </i></b></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"></div><div><blockquote><property><br />
<name><span class="Apple-style-span" style="font-family: Menlo; font-size: 11px;">mapred.job.tracker</span></name><br />
<value><span class="Apple-style-span" style="color: #333333; font-family: 'andale mono', 'lucida console', monospace; font-size: 14px; line-height: 18px; white-space: pre;">localhost:54311</span></value><br />
</property></blockquote><blockquote>This tells Hadoop the host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. </blockquote></div></div><br />
You can check the status of your job tracker and hdfs name node at the following locations <b style="font-style: italic;">http://localhost:50030/ </b>and<b style="font-style: italic;"> http://localhost:50070/.</b><br />
<br />
<br />
<div class="p1" style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><span class="Apple-style-span" style="color: black; font-family: Times;"><span class="Apple-style-span" style="font-size: large;"><b><i>b) Local-mode:</i></b></span></span></div><div class="p1" style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><span class="Apple-style-span" style="color: black; font-family: Times; font-size: small;">Change the following properties of the 3 files.</span></div><div class="p1" style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"></div><ol style="text-align: left;"><li><b>$HADOOP_HOME/conf/core-site.xml: </b></li>
</ol><div><blockquote><property><br />
<name>fs.default.name</name><br />
<value>file:///</value><br />
<description>The name of the default file system. A URI whose<br />
scheme and authority determine the FileSystem implementation.</description><br />
</property></blockquote><blockquote> Files are accessed locally using the local file system protocol. Remember, no name node is running in local mode. </blockquote><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><b><i> 2. $HADOOP_HOME/conf/hdfs-site.xml: </i></b></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"></div><div><blockquote><property><br />
<name>dfs.replication</name><br />
<value>1</value><br />
</property></blockquote><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"> This is irrelevant now, since HDFS is not being used as the file system. </div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"><br />
</div></div><div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"> <b><i> 3. $HADOOP_HOME/conf/mapred-site.xml: </i></b></div><div style="margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;"></div><div><blockquote><property><br />
<name><span class="Apple-style-span" style="font-family: Menlo; font-size: 11px;">mapred.job.tracker</span></name><br />
<value><span class="Apple-style-span" style="color: #333333; font-family: 'andale mono', 'lucida console', monospace; font-size: 14px; line-height: 18px; white-space: pre;">local</span></value><br />
</property></blockquote><blockquote> No job tracker here: Hadoop is now running in local mode, with no job tracker or data node daemons. </blockquote></div></div></div></div><div style="text-align: left;"><br />
</div><div style="text-align: left;">Use the local mode for debugging stuff in Eclipse.</div><div style="text-align: left;">Thanks to Michael for the original <a href="http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/">post</a>. </div><br />
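As a quick sanity check of which mode your configuration points at, a short script can parse core-site.xml (the standard name/value property layout shown above) and classify fs.default.name. This is just a sketch; the helper names are my own, and the file path on your machine would be $HADOOP_HOME/conf/core-site.xml.

```python
import xml.etree.ElementTree as ET

def default_fs(core_site_xml):
    """Return the value of fs.default.name from a core-site.xml document string."""
    root = ET.fromstring(core_site_xml)
    for prop in root.iter("property"):
        if prop.findtext("name") == "fs.default.name":
            return prop.findtext("value")
    return None

def mode_of(uri):
    # file:/// (or nothing set) -> local mode; hdfs://... -> (pseudo-)distributed
    if uri is None or uri.startswith("file:"):
        return "local"
    if uri.startswith("hdfs:"):
        return "distributed"
    return "other"

if __name__ == "__main__":
    sample = """<configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:54310</value>
      </property>
    </configuration>"""
    print(mode_of(default_fs(sample)))  # distributed
```

For a real check, read the file contents with open("...core-site.xml").read() and pass them to default_fs.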
</div>Kapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com5tag:blogger.com,1999:blog-3090795324529853176.post-87528443774398999712011-07-05T14:35:00.000-07:002011-07-05T16:39:33.060-07:00Debugging Hadoop locally on Eclipse<div dir="ltr" style="text-align: left;" trbidi="on"><br />
I am now a Cloudera Certified Hadoop Developer (CCDH). Yayy!<br />
<br />
After doing all the theoretical work, I had a chance to work on Hadoop at my job. We have 3 different remote environments (namely test, qa and prod), and it's pretty hard to debug your job in the Map-Reduce paradigm.<br />
<br />
In this post I will discuss the 3 modes of Hadoop and which one you should use to debug stuff locally. In case you are interested in the configuration settings of the 3 modes, read my follow-up post <a href="http://let-them-c.blogspot.com/2011/07/configurations-of-running-hadoop.html">here</a>.<br />
<br />
I usually code in Eclipse and like to test my code locally before copying the jar over to any of those machines. Running Hadoop locally is easy, but debugging it with a debugger is hard.<br />
<br />
Let's start with some intro:<br />
Hadoop can run in 3 modes (only two of them run locally).<br />
<ol style="text-align: left;"><li><b><i>Standalone (or local) mode:</i></b> There are no daemons running in this mode. When you run jps in your terminal, there will be no JobTracker, NameNode or other daemons running. Hadoop just uses the local file system as a substitute for the HDFS file system. </li>
<li><b><i>Pseudo-distributed mode:</i></b> All daemons run on a single machine, mimicking the behaviour of a cluster. All the daemons run locally on your machine using the HDFS protocol. </li>
<li><b><i>Fully distributed mode:</i></b> This is the kind of environment you will usually find on test, prod and qa grids. It has hundreds of machines, each with some number of cores, and is where the true power of Hadoop shows. As an application developer you would not set up this environment; it's usually the admin folks who set it up. </li>
</ol>Now, you would usually use #3 while running your final job with real production data. Some developers also code and test on #3 machines (qa, test, prod). This post, however, is about running Hadoop in modes #1 and #2.<br />
<br />
As I mentioned earlier, I like to code and test stuff locally in Eclipse before doing the final run.<br />
To do this, Hadoop gives you two options: #1 (local mode) and #2 (Pseudo mode).<br />
<br />
<i><b>To debug Hadoop jobs you need to make the following configuration: </b></i><br />
<br />
a) In the conf folder of your HADOOP_HOME, just add the following line to hadoop-env.sh.<br />
<blockquote><b>export HADOOP_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5002"</b></blockquote>This will put your code in Remote Java Application mode. To run it, use the following steps:<br />
a) Click on Debug Configurations in eclipse.<br />
b) Select Remote Java Application on the menu on left.<br />
c) For the host just provide localhost, and the port should be the one provided in the address variable above, 5002 in this case. You can choose any valid port number.<br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-nVtIpWe6ZxA/ThODIrLq9-I/AAAAAAAAG90/lMYZm8qH2kA/s1600/Screen+shot+2011-07-05+at+2.27.58+PM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="229" src="http://1.bp.blogspot.com/-nVtIpWe6ZxA/ThODIrLq9-I/AAAAAAAAG90/lMYZm8qH2kA/s320/Screen+shot+2011-07-05+at+2.27.58+PM.png" width="320" /></a></div><br />
<b><i>When would you choose #1 over #2: </i></b>#1 and #2 are identical in that all the code runs locally. However, you cannot debug your mappers and reducers using mode #2. The main reason is that in Pseudo mode each mapper and reducer runs in its own JVM, and it is impossible to debug them all from one instance of Eclipse. The only way to debug your mappers and reducers to their full potential is local mode (#1): since all the mappers and reducers run in a single JVM, you can inspect your variables easily.<br />
<br />
You would run in mode #2 if you are interested in seeing how HDFS performs on your machine. In terms of power, I don't think there is a difference, since you only have one machine. However, local mode is faster, as Hadoop reads files directly from the local file system, whereas in Pseudo mode it goes through HDFS to read them.<br />
<br />
I will write a follow-up post on what <a href="http://let-them-c.blogspot.com/2011/07/configurations-of-running-hadoop.html">configurations are needed to run under the two modes</a> (mainly #1 and #2).<br />
<br />
Please feel free to comment.<br />
<br />
</div>Kapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com2tag:blogger.com,1999:blog-3090795324529853176.post-81922174687839898092009-10-16T14:06:00.000-07:002009-10-18T16:16:56.173-07:00Unzipping .gz files in JavaUnzipping a file is easy: just double-click and the OS unzips it using the default application.<br /><br />But what if there are hundreds of files in a folder you want to unzip? (We won't go into the details of why we want to unzip the files when we can directly read the zipped version... That's another story.)<br /><br />Task: How to unzip a ".gz" file using Java.<br /><br />Tools: The classes <a href="http://java.sun.com/j2se/1.4.2/docs/api/java/util/zip/GZIPInputStream.html">GZIPInputStream</a> and <a href="http://java.sun.com/j2se/1.4.2/docs/api/java/io/OutputStream.html">OutputStream</a> help us do the task.<br /><br /><code><br /></code><div style="text-align: left;"> GZIPInputStream gzipInputStream = new GZIPInputStream(new FileInputStream(inFilename)); <br /> <br />OutputStream out = new FileOutputStream(outFilename);<br /><br />byte[] buf = new byte[102400]; //size can be changed according to programmer's need.<br /> int len;<br /> while ((len = gzipInputStream.read(buf)) > 0) { <br /> out.write(buf, 0, len);<br /> }<br /> out.close();<br /> gzipInputStream.close();<br /></div><br /><br /><br />The main points to note here are:-<br />a) The GZIPInputStream class creates an input stream for the file to be read.<br />b) The GZIPInputStream.read(buf) function reads the uncompressed data into the buffer. 
The return type of this function is int, which specifies the number of bytes read.<br />c) The data read can be written into a FileOutputStream in uncompressed form.<br /><br /><br />So easy to unzip files.<br /><br />------------------------<br /><br />Kapil DalwaniKapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com4tag:blogger.com,1999:blog-3090795324529853176.post-78838769101461035752009-10-16T03:58:00.000-07:002009-10-17T00:32:42.621-07:00Curl (like) implemented in PythonThe purpose of this post is to implement something that mirrors files from the web. In other words, to download a set of files from a seed and save them to your local directory.<br /><br />I will use Python to implement this. There is far better code out there than mine; I am implementing this just to gain familiarity with Python and some of its libraries.<br /><br />I am trying to implement something similar to curl in Python.<br /><br />Coming soon!<br /><br /><br />--------------<br />Kapil DalwaniKapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com0tag:blogger.com,1999:blog-3090795324529853176.post-46976362555725250762009-04-24T16:36:00.000-07:002009-10-17T00:32:55.577-07:00XML as objects in OOPsI always wondered how we can represent XML tags in code, and the answer I came up with was pretty easy and neat.<br /><br />Each XML document begins with a root node and has some nested tags embedded in it. Those tags have more nested tags embedded between them. Thus they follow a tree-like structure.<br />Also, every node has some attributes associated with it. 
They are nothing but key-value properties of that node.<br />Consider the XML data shown below.<br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_HHgFQo_xWBc/SfJU8Y6xUfI/AAAAAAAAE34/M7dN1W24Q0c/s1600-h/untitled.bmp"><img style="cursor: pointer; width: 385px; height: 351px;" src="http://3.bp.blogspot.com/_HHgFQo_xWBc/SfJU8Y6xUfI/AAAAAAAAE34/M7dN1W24Q0c/s400/untitled.bmp" alt="" id="BLOGGER_PHOTO_ID_5328414705590555122" border="0" /></a><br />Thus we can represent each node, say root1, as<br />XML root1 = new XML("root1")<br />root1.addAttribute(key1,value1)<br />root1.addAttribute(key2,value2)<br /><br />The same can be done for sub-root and root2. But, as the sub-roots are embedded in root2, we can add another line:<br />root2.addChild(sub-root)<br />root2.addChild(sub-root)<br /><br />Now, finally, root1 and root2 can be added as children to root, completing the whole tree structure.<br /><br />Refer to this <a href="http://code.google.com/p/xmldomobjects/">page </a>for code and details.<br /><br /><br /><br />-----------<br />Kapil DalwaniKapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com1tag:blogger.com,1999:blog-3090795324529853176.post-25319924075842751352009-04-06T12:10:00.000-07:002009-04-06T14:49:10.488-07:00Sorting .....Comparison SortSorting is a technique defined as follows:<br /><br />if A[i] and A[j] are two elements of an array, then the array is sorted if<br /><br />A[i] <= A[j] for all i < j.<br /><br /><span style="font-weight: bold;"><span style="font-size:130%;">QuickSort</span></span><br /><br />You can find the algorithm of QuickSort <a href="http://en.wikipedia.org/wiki/Quicksort">here </a>on Wikipedia.<br /><br />I would just touch base on the algorithm.<br /><br />QuickSort(int start, int end, int * arr){<br /><br /> if(start < end){ <br /> int q = partition(start,end,arr); <br /> QuickSort(start,q-1,arr); <br /> QuickSort(q+1,end,arr); <br /> }<br
/>}<br /> This is a recursive definition of QuickSort. The main function is partition, which returns the index of the pivot element. It works as follows: if Pivot = arr[end] is the element initially chosen as the pivot, then at the end of the function the pivot is placed at its correct position q. All elements with index less than q are smaller than the pivot and all elements with index greater than q are greater than the pivot. For the rest of the algorithm, the pivot at index q doesn't change its position.<br /><br /><span style="font-weight: bold;font-size:130%;" >Iterative version</span><br /><br />QuickSort(int start, int end, int * arr){<br /><br /> bound = {start, end};<br /> pushStack(bound);<br /><br /> while(stackNotEmpty()){<br /> bound = popStack();<br /> while(bound.lowerVal < bound.upperVal){<br /> int q = partition(bound.lowerVal, bound.upperVal, arr);<br /> if(q - bound.lowerVal > bound.upperVal - q){<br /> Tempbounds = {bound.lowerVal, q-1};<br /> bound.lowerVal = q+1;<br /> } else {<br /> Tempbounds = {q+1, bound.upperVal};<br /> bound.upperVal = q-1;<br /> }<br /> pushStack(Tempbounds);<br /> }<br /> }<br />}<br /><br />You can find the C implementation of this code on code.google.com.<br />Download <a href="http://code.google.com/p/sortingquicksort/">QuickSort</a>.Kapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com1tag:blogger.com,1999:blog-3090795324529853176.post-51432698091746816372009-04-02T20:11:00.001-07:002009-04-02T23:20:58.989-07:00Binary Tree TraversalThere are 3 basic ways to traverse a binary tree:-<br />a) PreOrder<br />b) InOrder<br />c) PostOrder<br /><br />And, it's very easy to define them recursively.<br /><br />PreOrder:-<br /><br />a) Visit the root<br />b) Visit the left subtree in PreOrder<br />c) Visit the right subtree in PreOrder<br /><br />The same can be done for InOrder and PostOrder, where the root is visited at stage b) and stage c) respectively.
<br /><br />Here is a sample code:<br />PreOrder ( Tree tree){<br /><br /> if(tree!=null){<br /> printf("%d ", tree->info);<br /> PreOrder(tree->left);<br /> PreOrder(tree->right);<br /> }<br />}<br /><br />Writing the recursive definition is easy; <br />writing the iterative version is a bit tricky. <br /><br />You can find the iterative versions of all three <a href="http://code.google.com/p/treetraversal/">here</a>. I have posted them at code.google.com.<br /><br />Feel free to use it and send me your suggestions/comments/feedback.<br /><br />Cheers,<br />Kapil DalwaniKapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com0tag:blogger.com,1999:blog-3090795324529853176.post-90434637543239077242009-04-01T10:42:00.000-07:002009-04-02T13:36:47.467-07:00Why Let-Them-C ???Well, to most Indian students who have written some piece of C code, this title will sound familiar. Yes, I wanted to name it "Let Us C" after the famous C language book authored by Yashavant Kanetkar. But I didn't want to get into legal issues, so I decided to name it "Let Them C"!<br /><br />I will try to cover problems, suggestions, code etc. related to technical issues faced during my professional experience. So, it's a blog meant only for coders... others can do a void return. <br /><br />Feel free to contribute to, hack, and code on this website.<br /><br />Cheers,<br />Kapil Dalwani<br /><br />P.S. Why is the background black?? Well, just my way of showing that I care for Mother Earth....Kapil Dalwanihttp://www.blogger.com/profile/13094198937084754843noreply@blogger.com2