Debarshi Ray: GNOME Terminal: a little something for Fedora 29https://debarshiray.wordpress.com/2018/05/24/gnome-terminal-a-little-something-for-fedora-29/
<p>
Can you spot what that is?
</p>
<p>
<a href="https://debarshiray.files.wordpress.com/2018/05/gnome-terminal-title-auto.gif"><img src="https://debarshiray.files.wordpress.com/2018/05/gnome-terminal-title-auto.gif?w=700" alt="GNOME Terminal: Fedora 29 teaser" class="alignnone size-full wp-image-2413" /></a></p>2018-05-24T10:14:53+00:00Debarshi RayKartik Mistry: Goodbye, Vinod Bhatt!https://kartikm.wordpress.com/2018/05/24/goodbye-vinod-bhatt/
<p>Vinod Bhatt said goodbye to everyone yesterday, that is, on 23rd May, and one could say that an era of Gujarati humour literature has come to an end. Vinod Bhatt, Bakul Tripathi and Ashok Dave: these three are my favourite humour writers.</p>
<p>Once, when I ran into Vinod Bhatt by accident, I could not recognize him (he was in the sales tax or some such department at the time, and had come to my maternal uncle's house to deliver an invitation). Only later did I find out that he was Vinod Bhatt!</p>
<p>Now, a tribute will be paid to him by adding more information to his Wikipedia article.</p>2018-05-24T05:21:37+00:00કાર્તિકSayamindu Dasgupta: Testing the “wide walls” design principle in the wildhttps://unmad.in/blog/2018/05/testing-the-wide-walls-design-principle-in-the-wild/
<p>Seymour Papert is credited as saying that tools to support learning should have “high ceilings” and “low floors.” The phrase is meant to suggest that tools should allow learners to do complex and intellectually sophisticated things but should also be easy to begin using quickly. Mitchel Resnick extended the metaphor to argue that learning toolkits should also have <a href="https://design.blog/2016/08/25/mitchel-resnick-designing-for-wide-walls/">“wide walls”</a> in that they should appeal to diverse groups of learners and allow for a broad variety of creative outcomes. In <a href="https://dl.acm.org/citation.cfm?id=3173935">a new paper</a>, <a href="https://mako.cc/academic/">Benjamin Mako Hill</a> and I attempted to provide the first empirical test of Resnick’s wide walls theory. Using a natural experiment in the Scratch online community, we found causal evidence that “widening walls” can, as Resnick suggested, increase both engagement and learning.</p>
<p>Over the last ten years, the “wide walls” design principle has been widely cited in the design of new systems. For example, Resnick and his collaborators relied heavily on the principle in the design of the <a href="https://scratch.mit.edu">Scratch</a> programming language. Scratch allows young learners to produce not only games, but also interactive art, music videos, greeting cards, stories, and much more. As part of that team, I was guided by the “wide walls” principle when I designed and implemented the <a href="https://en.scratch-wiki.info/wiki/Cloud_Data">Scratch cloud variables system</a> in 2011-2012.</p>
<p>While designing the system, I hoped to “widen walls” by supporting a broader range of ways to use variables and data structures in Scratch. Scratch cloud variables extend the affordances of the normal Scratch variable by adding <em>persistence</em> and <em>shared-ness</em>. A simple example of something possible with cloud variables, but not without them, is a global high-score leaderboard in a game (example code is below). After the system was launched, I saw many young Scratch users using the system to engage with data structures in new and incredibly creative ways.</p>
<figure style="width: 40%;">
<img src="https://unmad.in/images/blog/2018/testing-wide-walls/cloud-variable-script.png" alt="cloud variable script" />
Example of Scratch code that uses a cloud variable to keep track of high-scores among all players of a game.
</figure>
<p>Although these examples reflected powerful anecdotal evidence, I was also interested in using quantitative data to estimate the causal effect of the system. Understanding the causal effect of a new design in real-world settings is a major challenge. To do so, we took advantage of a “natural experiment” and some clever techniques from econometrics to measure how learners’ behavior changed when they were given access to a wider design space.</p>
<p>Understanding the design of our study requires understanding a little bit about how access to the Scratch cloud variable system is granted. Although the system has been accessible to Scratch users since 2013, new Scratch users do not get access immediately. They are granted access only after a certain amount of time and activity on the website (the specific criteria are not public). Our “experiment” involved a sudden change in policy that altered the criteria for who gets access to the cloud variable feature. Through no act of their own, more than 14,000 users were given access to the feature, literally overnight. We looked at these Scratch users immediately before and after the policy change to estimate the effect of access to the broader design space that cloud variables afforded.</p>
<p>We found that use of data-related features was, as predicted, increased by both access to and use of cloud variables. We also found that this increase was not only an effect of projects that use cloud variables themselves. In other words, learners with access to cloud variables—and especially those who had used it—were more likely to use “plain-old” data-structures in their projects as well.</p>
<p>The graph below visualizes the results of one of the statistical models in our paper and suggests that we would expect that 33% of projects by a prototypical “average” Scratch user would use data structures if the user in question had never used cloud variables, but that we would expect that 60% of projects by a similar user would if they <em>had</em> used the system.</p>
<figure style="width: 70%;">
<img src="https://unmad.in/images/blog/2018/testing-wide-walls/graph.png" alt="probability graph" />
Model-predicted probability that a project made by a prototypical Scratch user will contain data structures (w/o counting projects with cloud variables)
</figure>
<p>It is important to note that the estimated effect above is a “local average effect” among people who used the system because they were granted access by the sudden change in policy (this is a subtle but important point that we explain in some depth in the paper). Although we urge care and skepticism in interpreting our numbers, we believe our results are encouraging evidence in support of the “wide walls” design principle.</p>
<p>Of course, our work is not without important limitations. Critically, we also found that the rate of adoption of cloud variables was very low. Although it is hard to pinpoint the exact reason for this from the data we observed, it has been suggested that widening walls may have a potential negative side-effect of making it harder for learners to imagine what the new creative possibilities might be in the absence of targeted support and scaffolding. Also important to remember is that our study measures “wide walls” in a specific way in a specific context and that it is hard to know how well our findings will generalize to other contexts and communities. We discuss these caveats, as well as our methods, models, and theoretical background in detail in our paper, which is now available for <a href="https://dl.acm.org/citation.cfm?id=3173935">download as an open-access piece</a> from the <span class="caps">ACM</span> digital library.</p>
<hr />
<p><em>This blog post, and <a href="https://dl.acm.org/citation.cfm?id=3173935">the open access paper</a> that it describes, is a collaborative project with <a href="https://mako.cc/academic/">Benjamin Mako Hill</a>. Financial support came from the eScience Institute and the Department of Communication at the University of Washington. Quantitative analyses for this project were completed using the Hyak high performance computing cluster at the University of Washington.</em></p>2018-05-23T04:00:00+00:00Sayamindu DasguptaShrinivasan: Project Idea – Call for Contributors – web scraping West Bengal Public Library Networkhttps://goinggnu.wordpress.com/2018/05/23/project-idea-call-for-contributors-web-scrapping-west-bengal-public-library-network/
<div>
<div>
<div>
<div>Hello all,<p></p>
</div>
<p>We have Bengali Wikisource friends requesting <span class="il">web</span> <span class="il">scraping</span> of PDF files from a DSpace-based <span class="il">library</span>.</p>
<p><a href="http://dspace.wbpublibnet.gov.in:8080/jspui/" target="_blank" rel="noopener">http://dspace.wbpublibnet.gov.in:8080/jspui/</a></p>
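<p>If it helps anyone getting started, here is a rough, stdlib-only Python sketch (untested against the actual site; the base URL is taken from the link above, everything else is an assumption) that pulls PDF bitstream links out of a fetched JSPUI page's HTML:</p>

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Assumed base URL of the DSpace JSPUI instance linked above.
BASE = "http://dspace.wbpublibnet.gov.in:8080/jspui/"

class PdfLinkParser(HTMLParser):
    """Collects href values that point at PDF files."""
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.pdf_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if href.lower().endswith(".pdf"):
                # Resolve relative bitstream links against the site base.
                self.pdf_links.append(urljoin(self.base, href))

def extract_pdf_links(page_html, base=BASE):
    parser = PdfLinkParser(base)
    parser.feed(page_html)
    return parser.pdf_links
```

<p>The downloading itself (and politeness like rate limiting) would go on top of this; the function only does the link extraction step.</p>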
</div>
</div>
<div>The site seems to be down sometimes, but comes back up within a few hours.</div>
<div></div>
<div>Can anyone contribute to this project?<p></p>
</div>
<div></div>
<p>If you are interested, reply here or mail me at tshrinivasan@gmail.com</p>
</div>
<p>Thanks.</p>2018-05-22T20:23:19+00:00tshrinivasanShrinivasan: 30 Project Ideas for contributing to Indic Wikipedia Projectshttps://goinggnu.wordpress.com/2018/05/22/30-project-ideas-for-contributing-to-indic-wikipedia-projects/
<p>Last week, I had an interesting meeting with Panjabi Wikimedian community and CIS-A2K team.</p>
<p>The Panjabi Wikimedia community is small in number, but each of them contributes their best. Many of them do 100-days-of-wiki, a personal wiki editathon for 100 days. A few of them do it on multiple sites, many times a year.</p>
<p>Their interest in contributing and their passion for their language are awesome.</p>
<p>We interacted on Wikisource, Wiktionary and Wikipedia, and I shared many ideas to improve their workflow. They are looking for tools to automate their tasks; those tools will be useful for all wiki communities.</p>
<p>Then, I had some great discussions with the CIS-A2K team. We spoke about many interesting project ideas.<br />
Listing all the ideas here.</p>
<p>1. List down the Top 10 tricks/hacks/must know on any wikisource project</p>
<p>2. Make simple tutorials on how to start contributing to wiki, in all possible languages. We still don't have an ebook or an easy starter guide in Tamil. There may be video tutorials; curate them and present them in a better way so that they are easy to find.</p>
<p>3. A Telegram bot to proofread Wikisource content. Get a page from Wikisource and split it into lines, then words. Show a word and its OCRed content in a Telegram bot. The user verifies it or types the correct spelling in Telegram itself, and the changes are submitted back to Wikisource. Thus, we can do collaborative proofreading easily.</p>
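<p>For idea 3, the page-splitting part might look something like this minimal Python sketch, where the <code>verify</code> callback stands in for the Telegram round-trip (all names here are hypothetical):</p>

```python
def proofread_page(ocr_text, verify):
    """Walk an OCRed Wikisource page word by word.

    `verify` stands in for the Telegram interaction: it receives one
    OCRed word and returns the corrected spelling (or the word
    unchanged, if the user confirms it is already right).
    """
    fixed_lines = []
    for line in ocr_text.splitlines():
        # Split each line into words, run each word past the verifier,
        # and stitch the corrected line back together.
        fixed_lines.append(" ".join(verify(word) for word in line.split()))
    return "\n".join(fixed_lines)
```

<p>For example, with <code>fixes = {"helo": "hello"}</code>, calling <code>proofread_page("helo world", lambda w: fixes.get(w, w))</code> rebuilds the page with the corrected word. The real bot would replace the callback with a send/receive loop against the Telegram API.</p>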
<p>4. Explore how to use Flickr to help photographers donate their photos to Commons. Flickr is easy for them to upload to and showcase their work on. From there, we should move the photos to Commons. A few tools are already available; explore them and train photographers on them.</p>
<p>5. We should celebrate the volunteers who contribute to wiki, through events, news announcements, interviews, etc. CIS may explore this.</p>
<p>6. Web application for OCR4WikiSource</p>
<p>7. Make a web application to record audio, upload it to Commons and add it to Wiktionary words. Explore Lingua Libre for the web app.</p>
<p>8. Make a mobile application to record audio, upload it to Commons and add it to Wiktionary words.</p>
<p>9. CIS may ask the language based organizations to give their works/tools on public licenses.</p>
<p>10. A one/two-day meeting/conference to connect various language technologies. Each team can demonstrate the tools they are working on; others can learn about them and use them for their languages. CIS may organize this soon.</p>
<p>11. Building spell checkers for Tamil. Learn how other languages are doing it. Odia seems to have a good spell checker; explore that.</p>
<p>12. For iOS, there is no Commons app to upload photos. It was there some time ago. Fix the iOS Commons app and re-release it.</p>
<p>13. Build Maps with local languages with OSM.</p>
<p>14. One/two-day training on wiki tech: gadgets, tools, Toolserver, API, etc.</p>
<p>15. Tweet marketing to promote the ebooks released in wikisource projects. Measure the downloads.</p>
<p>16. CIS may talk with Amazon about always releasing the ebooks from Wikisource for free on Amazon.</p>
<p>17. Explore the Valmigi project of Malayalam and chikubuku of Kannada for their ebooks.</p>
<p>18. Download ebooks from dspace, bengali books – West Bengal Public Library Network – url – <a href="http://dspace.wbpublibnet.gov.in:8080/jspui/" rel="nofollow">http://dspace.wbpublibnet.gov.in:8080/jspui/</a></p>
<p>19. Explore paid works for wikisource proofreading.</p>
<p>20. Blog on how Tamil Wikisource got 2000 ebooks from the TN government under a public domain license. Send it to CIS; they may try to do the same for other languages.</p>
<p>21. The ASI website has info about all monuments. Scrape it all and add it to wiki.</p>
<p>22. Scrape details from tourism sites and add them to wiki.</p>
<p>23. The Kannada archaeology site has tons of images, but with 3 seals added to all of them. Scrape the images, remove the seals and add them to Commons.</p>
<p>24. A tool to audit wiki sites: new users, edits, measurements, KPIs, reports, etc.</p>
<p>25. Discuss with wiki writers and help them automate their tasks. Build new tools to help them; give training on existing tools.</p>
<p>26. Get existing photos from many photographers. Get license doc. Add in OTRS. Have a team to upload the photos to commons.</p>
<p>27. Find the pages that don’t have images. Search in commons and add 1 image automatically.</p>
<p>28. The infobox in a wiki page may have an image. Check the same page in other languages, get the image from its infobox and use it in the pages where it is missing.</p>
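<p>For idea 28, the selection step could be sketched in Python like this (the field names and language codes are assumptions for illustration; real infobox parameter names vary per wiki):</p>

```python
def pick_infobox_image(infoboxes, preferred=("en",)):
    """Return the first infobox image found for an article, trying the
    preferred language editions first.

    `infoboxes` maps a language code to that edition's infobox fields,
    e.g. {"en": {"image": "Taj_Mahal.jpg"}, "ta": {}}.
    """
    # Try preferred languages first, then the rest in a stable order.
    others = sorted(set(infoboxes) - set(preferred))
    for lang in list(preferred) + others:
        image = infoboxes.get(lang, {}).get("image")
        if image:
            return image
    return None  # no edition has an infobox image
```

<p>A bot built around this would fetch the infoboxes via the MediaWiki API, then call something like this to fill the gap in the editions whose infobox has no image.</p>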
<p>29. Tito showed a broken JS script. Explore it and fix it.</p>
<p>30. Discuss with Victor and the Google team about improving the OCR feature and integrating it with Wikisource. Explore existing tools like <a href="http://tools.wmflabs.org/ws-google-ocr/" rel="nofollow">http://tools.wmflabs.org/ws-google-ocr/</a> and <a href="https://wikisource.org/wiki/Wikisource:Google_OCR" rel="nofollow">https://wikisource.org/wiki/Wikisource:Google_OCR</a></p>
<p> </p>
<p>Thanks to Ravi, Tito, Tanveer, Dan, Charan Singh, Manavpreet, Rupika, Gurlaal and Stain for the interesting meeting and great ideas.</p>
<p>We can work on these ideas and implement them soon.</p>
<p>If you are interested in working on any of these ideas, reply here or mail me at tshrinivasan@gmail.com</p>
<p> </p>2018-05-22T05:57:19+00:00tshrinivasanKartik Mistry: 102 Not Outhttps://kartikm.wordpress.com/2018/05/14/102-notout/
<p>Yesterday, the whole family enjoyed <i>102 Not Out</i>. It is a good thing that we are increasingly watching films that are not in 3D. After Chal Man Jeetva Jaiye, this was a nice one after a long break.</p>
<p>As a side effect of the film, I am translating the article on Saumya Joshi on Gujarati Wikipedia.</p>
<p>Yes, 77 years left to break that Chinese man's record. <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f609.png" style="height: 1em;" alt="😉" class="wp-smiley" /></p>2018-05-14T13:36:27+00:00કાર્તિકJaikiran Pai: Apache Ivy 2.5.0-rc1 released - Now allows timeouts on resolvershttps://jaitechwriteups.blogspot.com/2018/05/apache-ivy-250-rc1-released-now-allows.html
<div style="text-align: left;" dir="ltr">A few weeks back, we released the 2.5.0-rc1 version of Apache Ivy. Apache Ivy is a dependency management build tool, which is usually used in combination with Apache Ant. The download is available on the <a href="http://ant.apache.org/ivy/download.cgi">project download page</a>.<br /><br />This release is significant since the last release of Apache Ivy was way back in December 2014. So it's more than 3 years since the last official release. During these past few years, the project development stalled for a while. I use Apache Ivy in some of our projects and have been pretty happy with the tool. It's never a good sign to see one of your heavily used tools no longer under development, or even receiving bug fixes. So a year or so back, I decided to contribute some bug fixes to the project. Over time, the project management committee invited me to be part of the team.<br /><br />We decided that the first obvious, immediate goal would be to revive the project and do a formal release with bug fixes. This 2.5.0-rc1 is the result of that effort, which started almost a year back. A lot of changes and a good number of enhancements have made it into this release, which is the result of contributions from various members of the community. The complete list of release notes is available <a href="https://ant.apache.org/ivy/history/2.5.0-rc1/release-notes.html">here</a>.<br /><br />We intentionally named this release 2.5.0-rc1 (release candidate) since it's been a while since we did an official release, and also given the nature of the changes. Please give this release a try and let us know how it goes. Depending on the feedback, we will either release 2.5.0 or 2.5.0-rc2. As usual, some of us from the development team keep an active watch on the ivy user mailing <a href="http://ant.apache.org/ivy/mailing-lists.html">list</a>. 
So if you have any feedback or questions, please do drop us a mail there.<br /><br />Now, coming to one of the enhancements in this release (there's been more than one). One of the issues I personally had was that if the repository backing a dependency resolver configured for Ivy had connectivity issues, the build would just hang. This was due to the inability to specify proper timeouts for communicating with these repositories through the resolver. As of this release, Ivy now allows you to configure timeouts for resolvers. This is done through the use of the (new) timeout-constraints element in your Ivy settings file. More details about it are <a href="https://ant.apache.org/ivy/history/2.5.0-rc1/settings/timeout-constraints.html">here</a>. Imagine you have a url resolver which points to some URL. The URL resolver would typically look something like:<br /><br /><pre><code>&lt;url name="foo"&gt;<br /> &lt;ivy pattern=.../&gt;<br /> &lt;artifact pattern=.../&gt;<br /> &lt;artifact pattern=.../&gt;<br />&lt;/url&gt;</code></pre><br /><br /><br />Let's now try and configure a connection timeout for this resolver. The first thing you would do is define a named timeout-constraint, like below:<br /><br /><pre><code>&lt;timeout-constraints&gt;<br /> &lt;timeout-constraint name="timeout-1" connectionTimeout="60000" /&gt;<br />&lt;/timeout-constraints&gt;</code></pre><br /><br />The value of the name attribute can be anything of your choice. The value of the connectionTimeout attribute is a timeout in milliseconds. In the above example, we configure the "timeout-1" timeout-constraint to be 1 minute. You can even specify a readTimeout, which is also in milliseconds. 
More about this element can be found in the <a href="https://ant.apache.org/ivy/history/2.5.0-rc1/settings/timeout-constraint.html">documentation</a>.<br /><br />As you might notice, we have just defined a timeout-constraint here but haven't yet instructed Ivy to use this constraint for some resolver. We do that in the next step, where we set the "timeoutConstraint" attribute on the URL resolver that we had seen before:<br /><br /><br /><pre><code>&lt;url name="foo" timeoutConstraint="timeout-1"&gt;<br /> &lt;ivy pattern=.../&gt;<br /> &lt;artifact pattern=.../&gt;<br /> &lt;artifact pattern=.../&gt;<br />&lt;/url&gt;</code></pre><br /><br />Notice that the value of the "timeoutConstraint" attribute now points to "timeout-1", which we defined to have a 1-minute connection timeout. With this, when this URL resolver gets chosen by Ivy for dependency resolution, the connection timeout will be enforced, and if the connection fails to be established within this timeout, an exception gets thrown instead of the build hanging forever.<br /><br />Although the example uses a URL resolver to set up the timeout constraint, this feature is available for all resolvers that are shipped out of the box by Ivy. So you can even use it with the ibiblio resolver (which communicates with Maven Central).<br /><br /><br />Like I noted earlier, please do give this release a try and let us know how it goes.<br /><br /></div>2018-05-14T06:35:46+00:00JaikiranShrinivasan: Project Idea – Need a web interface for Tamil TTShttps://goinggnu.wordpress.com/2018/05/12/project-idea-need-a-web-interface-for-tamil-tts/
<p>Hello all,</p>
<p> </p>
<p>The Tamil TTS system provided by IITM and SSN College of Engineering has one issue: it can convert only one Tamil string to audio at a time.</p>
<p><a href="https://github.com/tshrinivasan/tamil-tts-install" rel="nofollow">https://github.com/tshrinivasan/tamil-tts-install</a></p>
<p>Because of this, we cannot do parallel conversion. A few full-length text books took 4-5 hours to convert. Hence, we could not make it a web application for public use.</p>
<div style="width: 1392px;" class="wp-caption alignnone"><img src="https://lh3.googleusercontent.com/XYurAfXasgzVA7YXEnWfAj8M5rI7Gl2y6Ku5wxNhK8btXT1Y7pnpQD6pyX_Nhk4gEwjBpLvQXc5YCnLSH-Bl3RHaAuSKzeBbnjrJ0T_HwSLkw4ovp8jANcIdQG_VyT1uUa08HWixt7CSS0QppxXZaA9g27BBnEMu8Ixrhl-m9jJFEbxSkHIy-Yos2ZM0nDNH4oVC7wdpK3oCpoe-UjqFrmWs28xLrcUsnwENcOWRKIlaveNjdCaW2AjtV86oqFzI5cvSCB2NiHaQA_mETdVaXD7iW83XdoKc6A8jIcT-NJVo5CE-xLsOClstQ0VD1ZRkIyIIunZOBogji0VEggCPS51cWGBPzv_r_KFDvdsbbTOOR0yb3BOS3Cz7HLv-FUkPIt_z5Ryde1N0clMT8KndFXszZrCTLL3UBpHGI_oTk571OsefC4DdktExeeMGjYcg9KWmApLklze0ufTRH6qe5M5U7DG1RwTNbR6g6E_dH-IUYMleqh12kAPXn0VlvSNaE0O72U3_JX6LwCIYg_18N2Ho3FgWK8xKnEjZYwL8JRcbgPVs5LEIoTlkdw5ZeYipDP92ty35QXH5v1LmKq0zRfjpW48kG6BtgjpWLeQ=w1382-h777-no" width="1382" class="SzDcob eGKoR" height="777" /><p class="wp-caption-text">Mohan</p></div>
<p>Mohan helped make the Tamil TTS script simpler and able to process multiple conversions simultaneously.</p>
<p>Here is the super script that does the magic.</p>
<p><a href="https://github.com/mohan43u/tamil-tts-install" target="_blank" rel="noreferrer noopener">https://github.com/mohan43u/tamil-tts-install</a></p>
<p>Thanks, Mohan, for your great work.</p>
<p>Now, we need to turn this into a web application, so that anyone can use it easily.</p>
<p>The requirements are below.<br />
1. User registration with Gmail<br />
2. The user should upload a Tamil text file<br />
3. Once it is converted, the user should receive an email with the link to<br />
the audio file<br />
4. We can keep the audio files for 1 week<br />
5. REST API support with authentication<br />
6. A queue system</p>
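<p>For requirement 6, a minimal Python sketch of the queue system might look like the following, with stand-ins for the actual TTS conversion and the notification email of requirement 3 (all names here are placeholders):</p>

```python
import queue
import threading

def tts_worker(jobs, convert, notify):
    """Background worker: converts queued uploads one at a time.

    `convert` and `notify` are stand-ins for the real TTS run and the
    notification email step.
    """
    while True:
        job = jobs.get()
        if job is None:        # sentinel tells the worker to stop
            break
        email, text_path = job
        audio_url = convert(text_path)
        notify(email, audio_url)
        jobs.task_done()

# Demo: each upload from the web front-end becomes one job in a queue.
jobs = queue.Queue()
sent = []
worker = threading.Thread(
    target=tts_worker,
    args=(jobs, lambda path: path + ".wav",
          lambda email, url: sent.append((email, url))))
worker.start()
jobs.put(("user@example.com", "book1.txt"))
jobs.put(None)  # shut the worker down once the queued jobs are done
worker.join()
```

<p>In the real application, a pool of such workers (or a task queue like Celery) would sit between the upload form and the TTS script, so long conversions no longer block the web front-end.</p>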
<p>All of this will be released under the GPL.</p>
<p>If you are interested in doing this, reply here or write to me.</p>2018-05-11T19:22:53+00:00tshrinivasanJaikiran Pai: VMWare vijava - The curious case of "incorrect user name or password" exceptionhttps://jaitechwriteups.blogspot.com/2018/05/vmware-vijava-curious-case-of-incorrect.html
<div style="text-align: left;" dir="ltr">In one of the projects I have been involved in, we use the <a href="http://www.yavijava.com/">yavijava</a> library (which is a fork of <a href="https://sourceforge.net/p/vijava/">vijava</a>) to interact with vCenter, which hosts our VMs. vCenter exposes various APIs through its webservice endpoints, which are invoked over HTTP(S). The yavijava library has the necessary hooks which allow developers to use an HTTP client library of their choice on the client side to handle invocations to the vCenter.<br /><br />In our integration, we plugged in the <a href="https://hc.apache.org/httpcomponents-client-ga/">Apache HTTP client library</a>, so that the yavijava invocations internally end up using this HTTP library for interaction. Things mostly worked fine and we were able to invoke the vCenter APIs. I say mostly, because every once in a while we kept seeing exceptions like:<br /><br />InvalidLogin : Cannot complete login due to an incorrect user name or password.<br /><br />This was puzzling since we were absolutely sure that the user name and password we used to interact with the vCenter were correct. Especially since all of the previous calls were going through fine, before we started seeing these exceptions.<br /><br />The exception stacktrace didn't include anything more useful and neither did any other logs. So the only option I was left with was to go look into the vCenter (server side) event logs to see if I could find something. Luckily, I had access to a setup which had a vSphere client, which I then used to connect to the vCenter. The vSphere client allows you to view the event logs that were generated on the vCenter.<br /><br />Taking a look at the logs showed something interesting and useful. 
Every time we ran into this "incorrect user name or password" exception on the client side, there was a corresponding event log on the vCenter server side at INFO level which stated "user cannot logon since user is already logged on". That event log was a good enough hint to give an idea of what might be happening.<br /><br />Based on that hint, the theory I could form was that, somehow, for an incoming (login) request, the vCenter server side notices something on the request which gives it the impression that the user is already logged in. Given my background with Java EE technologies, the immediate obvious thing that came to mind was that the request was being attached with a "Cookie" which the server side uses to associate requests against a particular session. Since I had access to the client side code which was issuing this login request, I was absolutely sure that the request did not have any explicitly set Cookie header. So that raised the question of where the cookie was being associated with the request. The only place that can happen, if it's not part of the request we issued, is within the HTTP client library. Reading up the documentation of the Apache HTTP client library confirmed the theory that the HTTP client was automagically associating a (previously generated) Cookie with the request. <br /><br />More specifically, the HTTP client library uses pooled connections. When a request is made, one of the pooled connections (if any) gets used. What was happening in this particular case was that a previous login would pick up connection C1 and the login would succeed. The response returned from vCenter for that login request would include a Cookie set in the response header. The Apache HTTP client library was then keeping track of this Cookie against the connection that was used. 
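<p>This cookie replay is not specific to the Apache HTTP client; any cookie-aware HTTP layer behaves the same way. Here is a rough, stdlib-only sketch of the mechanism (in Python rather than Java, with a made-up host and cookie name): a cookie jar stores the session cookie from the first login response and silently attaches it to a later login request.</p>

```python
import email.message
import urllib.request
from http.cookiejar import CookieJar

class FakeResponse:
    """Just enough of a response object for CookieJar.extract_cookies()."""
    def __init__(self, set_cookie):
        self._headers = email.message.Message()
        self._headers["Set-Cookie"] = set_cookie
    def info(self):
        return self._headers

jar = CookieJar()

# First login: the server replies with a session cookie, which the
# cookie-aware layer stores away.
first = urllib.request.Request("https://vcenter.example.com/sdk")
jar.extract_cookies(FakeResponse("soap_session=abc123; Path=/"), first)

# Second login request to the same host: the stored cookie is silently
# replayed, so the server sees an "already logged in" session.
second = urllib.request.Request("https://vcenter.example.com/sdk")
jar.add_cookie_header(second)
```

<p>After <code>add_cookie_header</code>, the second request carries the stored session cookie even though the caller never set a Cookie header, which is exactly the "already logged on" situation described above.</p>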
Now when a subsequent login request arrived, if the same pooled connection C1 gets used for this request, then the HTTP client library was attaching the Cookie that it kept track against this connection C1, to this new request. As a result, vCenter server side ends up seeing that the incoming login request has a Cookie associated with it, which says that there's already a logged in session for that request. Hence, that INFO message in the event logs of vCenter. Of course, the error returned isn't that informative and in fact a bit misleading since it says the username/password is incorrect.<br /><br />Now that we know what's going on, the solution was pretty straightforward. Apache HTTP client library allows you to configure Cookie policy management. Since in our case, we wanted to handle setting the Cookie explicitly on the request, we decided to go with the "ignoreCookies" policy which can be configured on the HTTP client. More about this can be found in the <a href="https://hc.apache.org/httpclient-3.x/cookies.html">HTTP client library documentation</a> (see the "Manual Handling of Cookies" section). Once we did this change, we no longer saw this exception anymore.<br /><br /><br />There isn't much information about this issue anywhere that I could find. The closest I could find was this forum thread <a href="https://sourceforge.net/p/vijava/discussion/826592/thread/91550e2a/">https://sourceforge.net/p/vijava/discussion/826592/thread/91550e2a/</a>. It didn't have a conclusive solution, but it does appear that it's the same issue that the user there was running into (almost 7 years back!)<br /><br /></div>2018-05-11T15:18:49+00:00JaikiranDebarshi Ray: GNOME Terminal: separate menu items for opening tabs and windowshttps://debarshiray.wordpress.com/2018/05/11/gnome-terminal-separate-menu-items-for-opening-tabs-and-windows/
<p>Astute users might have noticed that the <a href="https://wiki.gnome.org/Apps/Terminal/">GNOME Terminal</a> binary distributed by <a href="https://getfedora.org/en/workstation/">Fedora</a> has separate menu items for opening new tabs and windows, while the vanilla version available from GNOME doesn’t.</p>
<div style="width: 451px;" class="wp-caption aligncenter" id="attachment_2409"><img src="https://debarshiray.files.wordpress.com/2018/05/gnome-terminal-menuitems-tabs-windows.png?w=700" alt="gnome-terminal-menuitems-tabs-windows" class="alignnone size-full wp-image-2409" /><p class="wp-caption-text">With separate menu items</p></div>
<p>This has been the case since Fedora 25 and was achieved by a <a href="https://github.com/debarshiray/gnome-terminal/commits/gnome-3-22-ntfy-opn-ttl-ts">downstream patch</a> that reverted two <a href="https://git.gnome.org/browse/gnome-terminal/commit/?id=99fc013">upstream</a> <a href="https://git.gnome.org/browse/gnome-terminal/commit/?id=1fecaa3">commits</a>.</p>
<div style="width: 451px;" class="wp-caption aligncenter" id="attachment_2410"><img src="https://debarshiray.files.wordpress.com/2018/05/gnome-terminal-menuitems-unified-tabs-windows.png?w=700" alt="gnome-terminal-menuitems-unified-tabs-windows" class="alignnone size-full wp-image-2410" /><p class="wp-caption-text">Without separate menu items</p></div>
<p>I am happy to say that since version 3.28 GNOME Terminal has regained the ability to have separate menu items as a compile-time option. The <i>gnome-terminal-server</i> binary needs to be built with the <i>DISUNIFY_NEW_TERMINAL_SECTION</i> pre-processor macro defined. <a href="https://github.com/debarshiray/gnome-terminal/commit/57f235b31cdf0a19c13a6cbb808ad5ef0865f62b">Here’s</a> one way to do so.</p>2018-05-11T12:42:05+00:00Debarshi RayKushal Das: SecureDrop development sprint in PyCon 2018https://kushaldas.in/posts/securedrop-development-sprint-in-pycon-2018.html
<p><img src="https://kushaldas.in/images/pycon18logo.png" alt="" /></p>
<p><a href="https://securedrop.org">SecureDrop</a> will take part in <a href="https://us.pycon.org">PyCon
US</a> development sprints (from 14th to 17th May). This
will be the first time for the SecureDrop project to be present at the sprints.</p>
<p>If you have never heard of the project before: SecureDrop is an open source
whistleblower submission system that media organizations can install to
securely accept documents from anonymous sources. Currently, dozens of news
organizations including The Washington Post, The New York Times, The Associated
Press, USA Today, and more, use SecureDrop to preserve the anonymous tipline in
an era of mass surveillance. SecureDrop is installed on-premises in the news
organizations, and journalists and sources both use a web application to
interact with the system. It was originally coded by the late Aaron Swartz and
is now managed by <a href="https://freedom.press">Freedom of the Press Foundation</a>.</p>
<h2>How to prepare for the sprints</h2>
<p>The source code of the project is hosted on
<a href="https://github.com/freedomofpress/securedrop">Github</a>.</p>
<p>The web applications, administration CLI tool, and a small Qt-based GUI are all
written in Python. We use Ansible heavily for the orchestration. You can set up
the development environment using Docker. <a href="https://docs.securedrop.org/en/latest/development/setup_development.html">This
section</a>
of the documentation is a good place to start.</p>
<p>A good idea would be to create the initial Docker images for the development
before the sprints. We have marked many issues for PyCon Sprints and also there
are many documentation issues.</p>
<p>Another good place to look is the tests directory. We use pytest for most of
our test cases. We also have Selenium based functional tests.</p>
<h2>Where to find the team?</h2>
<p><a href="https://gitter.im/freedomofpress/securedrop">Gitter</a> is our primary
communication platform. During the sprint days, we will be in the same room as
the CPython development sprint (as I will be working on both).</p>
<p>So, if you are at the PyCon sprints, please visit us to learn more and maybe
start contributing to the project during the sprints.</p>2018-05-09T17:55:00+00:00Kushal DasSwaroop C H: PodSynchttps://swaroopch.com/2018/05/07/link-podsync/
<p><a href="http://podsync.net">PodSync.net</a> converts a YouTube channel into a podcast.</p>
<p>This is a beautiful bridge that lets me listen to <a href="https://www.patreon.com/confused/overview">DJ mixes by “Confused bi-Product of a Misinformed Culture”</a> without having to use YouTube.</p>2018-05-07T07:17:14+00:00swaroopRajeesh K Nambiar: Adventures in upgrading to Fedora 27/28 using ‘dnf system-upgrade’https://rajeeshknambiar.wordpress.com/2018/05/03/adventures-in-upgrading-to-fedora-27-28-using-dnf-system-upgrade/
<p><em>[This post was drafted on the day Fedora 27 was released, about half a year ago, but was not published. The issue bit me again with Fedora 28, so I am documenting it for reference next time.]</em></p>
<p>With <code>fedup</code> and subsequently <code>dnf</code> improving the upgrade experience of Fedora for power users, the last few system upgrades have been smooth, quiet, even unnoticeable. That speaks volumes about the maturity and user friendliness achieved by these tools.</p>
<p>Upgrading from Fedora 25 to 26 was equally uneventful and smooth (by the way: I have installed and used every version of Fedora since its inception, and the default wallpaper of Fedora 26 was the most elegant of them all!).</p>
<p>With that, on the release day I set out to upgrade the main workstation from Fedora 26 to 27 using <code>dnf system-upgrade</code> as <a href="https://fedoraproject.org/wiki/DNF_system_upgrade" target="_blank" rel="noopener">documented</a>. Before downloading the packages, dnf warned that upgrade cannot be done because of package dependency issues with <code>grub2-efi-modules</code> and <code>grub2-tools</code>.</p>
<h4>Things go wrong!</h4>
<p>I simply removed both the offending packages and their dependencies (assuming they were probably installed for the <code>grub2-breeze-theme</code> dependency; but <code>grub2-tools</code> actually provides <code>grub2-mkconfig</code>) and proceeded with <code>dnf upgrade --refresh</code> and <code>dnf system-upgrade download --refresh --releasever=27</code>. If you are attempting this, <strong>don’t</strong> remove the grub2 packages yet, but read on!</p>
<p>Once the download and check are completed, running <code>dnf system-upgrade reboot</code> reboots the system into the upgrade target, where the actual upgrade happens.</p>
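<p>Putting the steps above together, the whole upgrade is only three commands:</p>

```shell
# Fully update the current release, download the Fedora 27 packages,
# then reboot into the offline upgrade.
sudo dnf upgrade --refresh
sudo dnf system-upgrade download --refresh --releasever=27
sudo dnf system-upgrade reboot
```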
<p>Except that I was greeted with the EFI MOK (Machine Owner Key) screen on reboot. With the grub2 bootloader broken thanks to the removal of <code>grub2-efi-modules</code> and other related packages, a recovery had to be attempted.</p>
<h4>Rescue</h4>
<p>It is important to have a (possibly UEFI-enabled) live media to boot from. Boot into the live media and try to reinstall grub. Once booted in, mount the root filesystem under <code>/mnt/sysimage</code>, and the EFI boot partition at <code>/mnt/sysimage/boot/efi</code>. Then <code>chroot /mnt/sysimage</code> and try to reinstall the <code>grub2-efi-x64</code> and <code>shim</code> packages. If there’s no network connectivity, don’t despair: <code>nmcli</code> comes to your rescue. Connect to wifi using <code>nmcli device wifi connect &lt;ssid&gt; password &lt;wifi_password&gt;</code>. Generate the boot configuration using <code>grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg</code>, followed by the actual install: <code>grub2-install --target=x86_64-efi /dev/sdX</code> (the <code>--target</code> option ensures a correct host installation even if the live media was booted via legacy BIOS). You may now reboot and proceed with the upgrade.</p>
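<p>Condensed into one session from the live media, the rescue looks roughly like this (the device names are examples only; adjust them to your disk layout):</p>

```shell
# From the live environment: mount the installed system, chroot into it,
# and reinstall the bootloader. The root device and /dev/sdX are placeholders.
mount /dev/fedora/root /mnt/sysimage
mount /dev/sdX1 /mnt/sysimage/boot/efi
chroot /mnt/sysimage
nmcli device wifi connect <ssid> password <wifi_password>   # only if offline
dnf reinstall grub2-efi-x64 shim
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
grub2-install --target=x86_64-efi /dev/sdX
```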
<p>But this again failed at the upgrade stage because of the <code>grub</code> package conflict that <code>dnf</code> had warned about earlier.</p>
<h4>Solution</h4>
<p>Once booted into the old installation, take a backup of the <code>/boot/</code> directory, remove the conflicting <code>grub</code>-related packages, and copy back the backed-up <code>/boot/</code> contents, especially <code>/boot/efi/EFI/fedora/grubx64.efi</code>. After that, rebooting (using <code>dnf system-upgrade reboot</code>) found the grub contents intact, and the upgrade went smoothly.</p>
<p>For more details on the package conflict issue, follow this <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1491624" target="_blank" rel="noopener">bug</a>.</p>2018-05-03T07:16:55+00:00RajeeshSwaroop C H: “Notes on structured concurrency”https://swaroopch.com/2018/05/01/link-concurrency/
<p><a href="https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/">https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/</a></p>
<p>Such clear thinking and elucidation around concurrency!</p>2018-05-01T20:00:46+00:00swaroopKartik Mistry: છૂંદોhttps://kartikm.wordpress.com/2018/05/01/%e0%aa%9b%e0%ab%82%e0%aa%82%e0%aa%a6%e0%ab%8b/
<p><img src="https://kartikm.files.wordpress.com/2018/05/img_20180501_092829636546043102689971.jpg?w=249&amp;h=332" width="249" class="aligncenter wp-image-8047" height="332" /></p>
<p>As happens every year, this time too the process of making “mango chhundo” has begun. I remember that earlier we would bring Rajapuri mangoes home and grate them ourselves to make the chhundo. Now grated mango is available ready-made in the market, so there is less hassle. Even so, someone has to go every day to put the chhundo out in the sun and bring it back in. Putting it out is fine, but a special alarm is set so that we remember to bring it back. Last year we forgot to bring the chhundo down from the terrace; we remembered at 1:30 at night and had to go then. The danger is that at night some rat or bird might spoil it, and the night cold can also affect the chhundo’s quality.</p>
<p>Chhundo satisfies our craving for mangoes right up to Diwali, which makes it extremely important!</p>2018-05-01T08:03:03+00:00કાર્તિકDebarshi Ray: Libre Graphics Meeting 2018https://debarshiray.wordpress.com/2018/04/30/libre-graphics-meeting-2018/
<p>
I spent the last seven days attending <a href="https://libregraphicsmeeting.org/2018/">Libre Graphics Meeting</a> in sunny and beautiful Seville. This was my second LGM, the first being <a href="https://libregraphicsmeeting.org/2012/">six years ago</a> in Vienna, so it was refreshing to be back. I stayed in one of the GIMP apartments near the <a href="https://en.wikipedia.org/wiki/La_Alameda,_Seville">Alameda de Hércules</a> garden square. Being right in the middle of the city meant that everything of interest was either within walking distance or a short bus ride away.
</p>
<p>
<img src="https://debarshiray.files.wordpress.com/2018/04/img_20180425_140117284.jpg?w=700" alt="IMG_20180425_140117284" class="alignnone size-large wp-image-2402" />
</p>
<p>
Unlike other conferences that I have been to, LGM 2018 started at six o’clock in the evening. That was good because one didn’t have to worry about waking up in time not to miss the opening keynote; and you haven’t attended LGM if you haven’t been to the State of Libre Graphics. Other than that I went to <a href="http://pippin.gimp.org/">Øyvind’s</a> presentation on colour; saw Nara describe her last ten years with <a href="http://www.estudio.gunga.com.br/">Estúdio Gunga</a>; and listened to <a href="http://understandingfonts.com/who/dave-crossland/">Dave Crossland</a> and <a href="https://twitter.com/n8willis">Nathan Willis</a> talk about fonts. There was a lot of <a href="https://en.wikipedia.org/wiki/Live_coding">live coding based music</a> and <a href="https://en.wikipedia.org/wiki/Algorave">algorave</a> going on this year. My favourite was <a href="http://www.neilcsmith.net/">Neil C. Smith’s</a> performance using <a href="https://www.praxislive.org/">Praxis LIVE</a>.
</p>
<p>
<img src="https://debarshiray.files.wordpress.com/2018/04/img_20180425_155837676.jpg?w=700" alt="IMG_20180425_155837676.jpg" class="alignnone size-large wp-image-2401" />
</p>
<p>
All said and done, the highlight of this LGM had to be the <a href="https://www.gimp.org/news/2018/04/27/gimp-2-10-0-released/">GIMP 2.10.0</a> release at the beginning of the conference. <a href="http://gegl.org/">GEGL</a> 0.4.0 was also rolled out to celebrate the occasion. Much happiness and rejoicing ensued.
</p>
<p>
I spent my time at LGM alternating between delicious tapas, strolling down the narrow and colourful alleys of Seville, sight-seeing, and hacking on GEGL. I started sketching out a proper codec API for GeglBuffer modelled on <a href="https://wiki.gnome.org/Projects/GdkPixbuf">GdkPixbuf</a>, and continued to <a href="https://bugzilla.gnome.org/show_bug.cgi?id=791837">performance</a> <a href="https://bugzilla.gnome.org/show_bug.cgi?id=795686">tune</a> babl, but those are topics for later blog posts.
</p>
<p>
<img src="https://debarshiray.files.wordpress.com/2018/04/img_20180430_1404388592.jpg?w=700" alt="IMG_20180430_140438859~2" class="alignnone size-large wp-image-2400" /></p>2018-04-30T20:33:16+00:00Debarshi RaySwaroop C H: Back to WordPresshttps://swaroopch.com/2018/04/29/back-to-wordpress/
<p>Four years ago, I migrated this blog from <a href="https://swaroopch.com/2014/04/11/migrated_to_jekyll/">WordPress to Jekyll</a>, with the intention of using whatever format I want to use inside Emacs… Subsequently, my posting rate dropped drastically to just 13 posts in 4 years!</p>
<p>I don’t think that was a coincidence. Tools matter.</p>
<p>I believe the speed and ease of writing dropped drastically. Even simple steps like using photos in a post meant using a separate tool such as Finder.app (on macOS) or the command line to move them to the right directory and then linking to them from the post. In WordPress, that’s one drag-and-drop and done.</p>
<p>Similarly, having no comments was demotivating. While there tends to be more nitpicking these days, I would still like to benefit from the wisdom of the crowds.</p>
<p>So now I have migrated back to WordPress. Let’s see how this goes.</p>
<p> </p>2018-04-30T00:18:10+00:00swaroopPrakash Advani: Ubuntu 18.04 LTS (Bionic Beaver) Download Linkshttp://cityblogger.com/archives/2018/04/27/ubuntu-18-04-lts-bionic-beaver-download-links
<p>Ubuntu 18.04 LTS was released yesterday. Here are direct links to download it from the India server.</p>
<p> </p>
<table cellpadding="2" border="1">
<tbody>
<tr>
<th><strong>Ubuntu 18.04 LTS</strong></th>
<th><strong>Torrent Links</strong></th>
<th><strong>Direct Downloads</strong></th>
</tr>
<tr>
<td>Ubuntu Desktop 18.04 64-Bit</td>
<td><a href="http://mirrors.piconets.webwerks.in/ubuntu-mirror/ubuntu-releases/18.04/ubuntu-18.04-desktop-amd64.iso.torrent" target="_blank" rel="noopener">Torrent</a></td>
<td><a href="http://mirrors.piconets.webwerks.in/ubuntu-mirror/ubuntu-releases/18.04/ubuntu-18.04-desktop-amd64.iso" target="_blank" rel="noopener">Main Server</a></td>
</tr>
<tr>
<td>Ubuntu Server 18.04 64-Bit</td>
<td><a href="http://mirrors.piconets.webwerks.in/ubuntu-mirror/ubuntu-cdimage/ubuntu/releases/18.04/release/ubuntu-18.04-server-amd64.iso.torrent" target="_blank" rel="noopener">Torrent</a></td>
<td><a href="http://mirrors.piconets.webwerks.in/ubuntu-mirror/ubuntu-cdimage/ubuntu/releases/18.04/release/ubuntu-18.04-server-amd64.iso" target="_blank" rel="noopener">Main Server</a></td>
</tr>
</tbody>
</table>
<p><strong>Other releases:</strong></p>
<p><a href="http://mirrors.piconets.webwerks.in/ubuntu-mirror/ubuntu-releases/18.04/">http://mirrors.piconets.webwerks.in/ubuntu-mirror/ubuntu-releases/18.04/</a></p>
<p><a href="http://mirrors.piconets.webwerks.in/ubuntu-mirror/ubuntu-cdimage/ubuntu/releases/18.04/release/">http://mirrors.piconets.webwerks.in/ubuntu-mirror/ubuntu-cdimage/ubuntu/releases/18.04/release/</a></p>
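<p>After downloading, it is worth verifying the image; Ubuntu publishes a <code>SHA256SUMS</code> file in the same release directories. A sketch, assuming the ISO and the checksum file are in the current directory:</p>

```shell
# Verify only the desktop ISO's line from the downloaded checksum file.
grep 'ubuntu-18.04-desktop-amd64.iso$' SHA256SUMS | sha256sum -c -
```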
<p>Regards</p>
<p>Prakash</p>
<p> </p>
2018-04-27T10:55:36+00:00PrakashKartik Mistry: વેકેશનhttps://kartikm.wordpress.com/2018/04/24/v-for-vacation-2/
<p>* Vacation has begun, though my own vacation is already over. For the vacation, one of Kavin’s classes has been stopped and there is a plan to start another, while still making sure he gets plenty of time to play and roam around. The biggest nuisance of the vacation, however, is the idle people who will not let the children play on the society’s ground. Morning, noon and evening, they stand ready to raise some objection or other. I wonder: why don’t these idle folks edit Wikipedia, or ride a bicycle? Why don’t they take up running, or visit the nearby libraries? <img src="https://s1.wp.com/wp-content/mu-plugins/wpcom-smileys/uneasy.svg" style="height: 1em;" height="16" draggable="false" width="16" alt=":/" class="wp-smiley emoji" /></p>2018-04-24T08:22:09+00:00કાર્તિકSankar P: Engeyo Partha Mayakkam - Yaaradi Nee Moginihttps://psankar.blogspot.com/2018/04/engeyo-partha-mayakkam-yaaradi-nee.html
<div style="text-align: left;" dir="ltr">In the lineage of Tamil film lyricists Kannadasan, Vaali and Vairamuthu, the next place belongs to Na. Muthukumar. When his song "Ovvoru Pookalume" won the Indian national award, I was sad that he himself had not yet received one; later he won it two years in a row. I wrote venba verses on my Facebook congratulating him, hoping to show them to him if I ever met him in person.<br /><br /><br />Although he worked with many composers, his songs with Yuvan Shankar Raja were especially popular among the youth of that era.<br /><br /><br />Director Selvaraghavan's screenplays share a common element: there is a good-for-nothing hero whom nobody, not even his family, respects; a woman arrives from somewhere like an angel; and the hero reforms and rises for her sake. "Yaaradi Nee Mohini" has the same story structure. The song that plays in the background when the hero sees the heroine for the first time and falls in love at first sight is "<b>Engeyo Partha Mayakkam</b>", with Na. Muthukumar's lyrics. I believe the words were most likely written first and the tune set afterwards. It is a song with many lines I love, especially "<b><i>idi vizhundha veettil indru poochedigal pookkiradhu</i></b>" ("flowering plants bloom today in the house struck by lightning"). For this song I have written my own lines, fitting the same situation, to the best of my ability. The words sit on the tune; do try singing along.<br /><br />This film is the Tamil remake of a Telugu film whose screenplay Selvaraghavan wrote, and it was also the final film of Raghuvaran.<br /><br />If you have any thoughts about the song, please share them in the comments. If any line is unclear, do ask. Thanks.<br /><br /><br />Link to the song:<br /><br /><br /><br /><br /><b>My lines for this tune:</b><br /><br />உன்னோடு வாழ விருப்பம்!<br />உன்நிழலில் கொண்டேன் கிறக்கம்! 
<br />உன்னைப் பார்த்த நாளில் இருந்தே,<br />நானும் கொண்டேன் காதல் மயக்கம்!<br />கண்களை நீ இமைக்கும்போது,<br />கூப்பிட்டாயென நம்பும் மனது.<br /><br />என் தூக்கம் தூக்கிச் சென்ற பாவை,<br />என் ஏக்கம் ஏற்றி வைத்த பூவை,<br />இனி உனது நெருக்கம் எந்தன் தேவை,<br />உனக்கு அளிப்பேன் எந்தன் வாழ்வை!<br /><br />கால்கள் முளைத்த தாமரைநீயே!<br />காற்றில் நகரும் ஓவியம்நீயே!<br /><br />---<br />விழி கண்டேன் விழியே கண்டேன்<br />வழியை மறந்து உன்னைக் கண்டேன்<br />விழிகள் வழியே நீயும் நுழைய<br />வலியைக் கொடுக்கும் காதல் கொண்டேன்<br /><br />கொண்டேன் கொண்டேன் காதல் கொண்டேன்<br />காற்றில் அலையும் காகிதம் போலே<br />கவலை இன்றி திரிந்த நானும்<br />கவிதைநூல் போல் காதல்கொண்டேன்<br /><br />என்னோடு நீ, உன்னோடு நான்<br />மெய்யோடு மெய் கலந்திட வேண்டும்<br /><br />பாடி வந்த தேவதை நீதான்<br />தேடி வந்த அடிமை நான்தான்<br />கூடி நாமும் வாழ வேண்டும்<br />ஓடிப் போவோம் உயிரே உடன்வா<br /><br />-- (உன்னோடு வாழ விருப்பம்! ...)<br /><br />இரவின் இருளைச் சிறைப்பிடித்து<br />அதை விழியில் அடைத்து வைத் தாளோ<br />நிலவின் குளுமையை எடுத்து<br />தன் குரலில் இணைத்துக் கொண்ட தேன்மொழியோ<br /><br />சிறகொடிந்த பறவை ஒன்று<br />சிலிர்த்துக் கொண்டு எழுகிறது<br />விரலிடுக்கில் உலகைத் தூக்கிப்<br />பறந்து செல்ல விரைகிறது<br /><br />பகலும் இரவும் உன்னை நினைத்து<br />பசியும் ருசியும் மறந்து இளைத்து<br />நினைவைப் பிழிந்து கவிதை வடித்து<br />மனதை எடுத்து உனக்குக் கொடுத்து ...<br /><br />... உன்னோடு வாழ விருப்பம்!<br /><br /><br />P.S. 1: I have made a similar <a href="https://psankar.blogspot.in/2012/11/nenjukulle-kadal.html" target="_blank">attempt</a> like this once before, for another song.<br />P.S. 2: If you want to sing these lyrics, or use them for other commercial purposes, feel free to do so; I want no remuneration for it :) But I would be happy if you sent me a mail. This is given to the world under the CreativeCommons Zero License. 
</div>2018-04-21T16:38:50+00:00Sankar PDebarshi Ray: GNOME Terminal 3.28.x lands in Fedorahttps://debarshiray.wordpress.com/2018/04/16/gnome-terminal-3-28-x-lands-in-fedora/
<p>
<i>The following screenshots don’t have the correct colours. Their colour channels got inverted because of <a href="https://gitlab.gnome.org/GNOME/mutter/issues/72">this bug</a>.</i>
</p>
<p>
Brave testers of <a href="https://getfedora.org/en/workstation/prerelease/">pre-release Fedora</a> builds might have noticed the absence of updates to <a href="https://wiki.gnome.org/Apps/Terminal/">GNOME Terminal</a> and <a href="https://wiki.gnome.org/Apps/Terminal/VTE">VTE</a> during the Fedora 28 development cycle. That’s no longer the case. <a href="https://blogs.gnome.org/kalev/">Kalev</a> submitted gnome-terminal-3.28.1 as part of the larger GNOME 3.28.1 <a href="https://bodhi.fedoraproject.org/updates/FEDORA-2018-e67e16187d">mega-update</a>, and it will make its way into the repositories in time for the Fedora 28 release early next month.
</p>
<p>
This lull in the default Fedora Workstation terminal was not due to a lack of development effort, though. The GNOME 3.28 release had a relatively large number of changes in both GNOME Terminal and VTE, and it took some time to update the <a href="https://github.com/debarshiray/gnome-terminal">Fedora-specific</a> <a href="https://github.com/debarshiray/vte">patches</a> to work with the new upstream version.
</p>
<p>
Here are some highlights from the past six months.
</p>
<p><strong>Unified preferences dialog</strong></p>
<p>
The global and profile preferences were merged into a single preferences dialog. I am very fond of this unified dialog because I have a hard time remembering whether a setting is global or not.
</p>
<p>
<img src="https://debarshiray.files.wordpress.com/2018/04/gnome-terminal-3-28-preferences.png?w=700" alt="gnome-terminal-3.28-preferences" class="alignnone size-large wp-image-2394" />
</p>
<p><strong>Text settings</strong></p>
<p>
The profile-specific settings UI has seen some changes. The bulk of these are in the “Text” tab, which was previously known as “General”.
</p>
<p>
It’s now possible to <a href="https://bugzilla.gnome.org/show_bug.cgi?id=791968">adjust the vertical and horizontal spacing</a> between the characters rendered by the terminal for the benefit of those with visual impairments. The <a href="https://bugzilla.gnome.org/show_bug.cgi?id=559990">blinking of the cursor</a> can be more easily tweaked because the setting is now exposed in the UI. Some people are distracted by a prominently flashing cursor block in the terminal, but still want their thin cursors to flash elsewhere for the sake of discoverability. This should help with that.
</p>
<p>
<img src="https://debarshiray.files.wordpress.com/2018/04/gnome-terminal-3-28-preferences-text.png?w=700" alt="gnome-terminal-3.28-preferences-text" class="alignnone size-large wp-image-2395" />
</p>
<p>
Last but not least, it’s nice to see the profile ID occupy a less prominent
</p>
<p><strong>Colours and bold text</strong></p>
<p>
There are <a href="https://bugzilla.gnome.org/show_bug.cgi?id=793152">some</a> <a href="https://bugzilla.gnome.org/show_bug.cgi?id=762247">subtle</a> <a href="https://bugzilla.gnome.org/show_bug.cgi?id=722751">improvements</a> to the foreground colour selection for bold text. As a result, the “allow bold text” setting has been deprecated and replaced with “show bold text in bright colors” in the “Colors” tab. Various inconsistencies in the <a href="https://bugzilla.gnome.org/show_bug.cgi?id=774619">Tango palette</a> were also resolved.
</p>
<p><strong>Port to GAction and GMenu</strong></p>
<p>
The most significant non-UI change was <a href="https://bugzilla.gnome.org/show_bug.cgi?id=745329">the port</a> to <a href="https://developer.gnome.org/gio/stable/GAction.html">GAction</a> and <a href="https://developer.gnome.org/gio/stable/GMenuModel.html">GMenuModel</a>. GNOME Terminal no longer uses the deprecated GtkAction and GtkUIManager classes.
</p>
<p><strong>Blinking text</strong></p>
<p>
VTE now supports <a href="https://bugzilla.gnome.org/show_bug.cgi?id=579964">blinking text</a>. Try this:</p>
<pre> $ tput blink; echo "blinking text"; tput sgr0
</pre>
<p>
If you don’t like it, then there’s a setting to turn it off.
</p>
<p><strong>Overline and undercurl</strong></p>
<p>
Similar to underline and strikethrough, VTE now supports <a href="https://bugzilla.gnome.org/show_bug.cgi?id=767115">overline</a> and <a href="https://bugzilla.gnome.org/show_bug.cgi?id=721761">undercurl</a>. These can be interesting for spell checkers and software development tools.</p>2018-04-16T14:18:23+00:00Debarshi RaySanthosh Thottingal: u and uː vowel signs of Malayalamhttps://thottingal.in/blog/2018/04/15/vowel-signs-malayalam-orthography/
<p>The reformed or simplified orthographic script style of Malayalam was introduced in 1971 by this <a href="http://unicode.org/L2/L2008/08039-kerala-order.pdf">government order</a>. This is the style taught in schools, and textbook content is also set in it. The prevailing academic situation does not help students learn the exhaustive and rich orthographic set of the Malayalam script. At the same time, they observe plenty of wall writing, graffiti, billboards and handwriting that stick to the exhaustive orthographic set.</p>
<p>The sign marks for the vowels ഉ and ഊ (<em>u</em> and <em>uː</em>) take many diverse forms in the exhaustive orthographic set when joined with different consonants. In the reformed style, however, they are always detached from the base consonant, with the unique forms ു and ൂ for the vowel sounds <em>u</em> and <em>uː</em> respectively. Everyone learns to read both of these orthographic variants, either at school or from everyday observation. But when writing, the styles often get mixed up, as seen below.</p>
<figure style="width: 678px;" class="wp-caption aligncenter" id="attachment_1308"><a href="http://thottingal.in/blog/wp-content/uploads/2018/01/u-signs-mixup2.jpg"><img src="http://thottingal.in/blog/wp-content/uploads/2018/01/u-signs-mixup2-1024x435.jpg" alt="" width="678" class=" wp-image-1308" height="288" /></a>u sign forms on wall writings</figure>
<div class="mceTemp">The green mark indicates the use of reformed orthography to write പു (<em>pu</em>), and blue indicates the use of exhaustive-set orthography to write ക്കു (<em>kku</em>). But the one in red is an unusual use of exhaustive orthography to write ത്തു (<em>ththu</em>). Such usages are commonplace now, mainly, as I see it, due to the lack of academic training.</div>
<p> </p>
<figure style="width: 606px;" class="wp-caption aligncenter" id="attachment_1295"><a href="http://thottingal.in/blog/wp-content/uploads/2018/01/u-signs-overuse-e1515511461606.jpg"><img src="http://thottingal.in/blog/wp-content/uploads/2018/01/u-signs-overuse-e1515511461606-981x1024.jpg" alt="" width="606" class="wp-image-1295 " height="632" /></a>Redundant usage of vowel sign of u is indicated in circle</figure>
<p>In this blog post I try to consolidate the vowel signs of <em>u</em> and <em>uː</em>, referring to early script-learning resources for Malayalam.</p>
<h3>Vowel signs in Malayalam</h3>
<p>There are 37 consonants and 15 vowels in Malayalam (in addition, there are less common vowels like ൠ, ഌ and ൡ). Vowels have an independent existence only at word beginnings. Elsewhere they appear as consonant sound modifiers, in the form of vowel signs. These signs often modify the glyph shape of the consonants, and this is one reason for the complex nature of the Malayalam script. The marks can be distributed over the left and right of the base consonant. See the table below:</p>
<p><a href="https://thottingal.in/blog/wp-content/uploads/2018/04/usigns-eng.png"><img src="https://thottingal.in/blog/wp-content/uploads/2018/04/usigns-eng-1024x819.png" alt="" width="739" class="aligncenter wp-image-1430 size-large" height="591" /></a>As seen in the table, the signs <strong>ു, ൂ, ൃ</strong> ([u], [uː], [rɨ]) change the shape of the base consonant grapheme. It was not until the 1971 orthographic reformation that these signs got detached from the base grapheme. You can see the detached forms as well, in rows 5, 6 and 7 of the above table.</p>
<h3><strong>How do the vowel signs ‘ു’ [u] and ‘ൂ’ [uː] affect the base consonant?</strong></h3>
<p>In the exhaustive script set of Malayalam there are in fact 8 ways in which the ‘ു’ [u] and ‘ൂ’ [uː] sign marks change the shape of the base consonant grapheme. These 8 forms (4 for <em>u</em> and 4 for <em>uː</em>) are consolidated below.</p>
<p><a href="https://thottingal.in/blog/wp-content/uploads/2018/04/ku1-eng-1.png"><img src="https://thottingal.in/blog/wp-content/uploads/2018/04/ku1-eng-1-615x1024.png" alt="" width="615" class="aligncenter wp-image-1421 size-large" height="1024" /></a></p>
<p>The ‘ു’ [u] sign induces 4 types of shape variation in the base consonant.</p>
<ul>
<li><strong>ക(ka)</strong> and <strong>ര(ra)</strong> are modified by a shape we hereby call the <em><strong>hook</strong></em>. The same shape change applies to all conjuncts that end with ക, as in <strong>ങ്ക(n̄ka), ക്ക(kka), സ്ക(ska)</strong> and <strong>സ്ക്ക(skka)</strong>. Since conjuncts that end with <strong>ര(ra)</strong> assume a special shape, the <em><strong>hook</strong></em>-shaped sign does not apply to them.</li>
<li><strong>ഗ(ga), ഛ(ʧʰa), ജ(ʤa), ത(t̪a), ഭ(bʱa), ശ(ʃa), ഹ(ɦa)</strong> are modified by a shape resembling a <em><strong>tail</strong></em> that moves left and then comes back to the right. Conjuncts ending with these consonants also assume the same <em><strong>tail</strong></em> shape when the ‘ു’ [u] vowel sign appears after them.</li>
<li><strong>ണ(ɳa)</strong> and <strong>ന(na/n̪a)</strong> change their shape with an inward <em><strong>closed loop</strong></em>. Conjuncts ending with these consonants also assume the same <em><strong>loop</strong></em> shape when the ‘ു’ [u] vowel sign appears after them, for example <strong>ണ്ണ(ɳɳa), ന്ന(nna), ക്ന(kna)</strong>.</li>
<li>All other 24 consonants use the <em><strong>drop</strong></em> shape. As it is the most popular of all the [u] signs, it is often mistakenly used instead of the other signs mentioned above. This case is indicated by the red circle in the figure captioned <a href="http://thottingal.in/blog/wp-content/uploads/2018/01/u-signs-mixup2.jpg">u sign forms on wall writings</a>.</li>
</ul>
<p><a href="https://thottingal.in/blog/wp-content/uploads/2018/04/ku2-eng.png"><img src="https://thottingal.in/blog/wp-content/uploads/2018/04/ku2-eng-1024x910.png" alt="" width="739" class="aligncenter wp-image-1422 size-large" height="657" /></a></p>
<p>The ‘ൂ’ [uː] sign induces 4 types of shape variation in the base consonants.</p>
<ul>
<li><strong>ക(ka), ര(ra), ഗ(ga), ഛ(ʧʰa), ജ(ʤa), ത(t̪a), ഭ(bʱa), ശ(ʃa), ഹ(ɦa)</strong> can take two alternate uː sign forms: the first is the <em><strong>hook and tail</strong></em> shape, the second the <em><strong>hook and rounded tail</strong></em>.
<ul>
<li><em><strong>Hook and rounded tail</strong></em> is more popular with the consonants <strong>ക(ka), ര(ra)</strong> and <strong>ഭ(bʱa)</strong></li>
<li><em><strong>Hook and tail</strong></em> is more popular with the consonants <strong>ഗ(ga), ഛ(ʧʰa), ജ(ʤa), ത(t̪a), ശ(ʃa)</strong> and <strong>ഹ(ɦa)</strong></li>
</ul>
</li>
<li>The outward <em><strong>open loop</strong></em> shape is assumed by the ‘ൂ’ [uː] sign mark when associated with the consonants <strong>ണ(ɳa)</strong> and <strong>ന(na/n̪a)</strong></li>
<li>All other 24 consonants use the <em><strong>double-drop</strong></em> shape. As it is the most popular of all the [uː] signs, it is often mistakenly used instead of the other signs mentioned above</li>
</ul>
<p style="text-align: left;">Note: The sign shape names <em><strong>drop, double-drop, hook, hook and tail, hook and rounded tail, tail, closed loop and open loop</strong></em> are the author’s own choice; hence there are no citations to the literature.</p>
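<p>Incidentally, whichever shape a font draws, the underlying encoding is the same: reformed and exhaustive styles use identical codepoints, and the drop, hook or tail is purely the font’s rendering decision. A quick check in Python:</p>

```python
import unicodedata

# The u and uː signs are single combining characters, regardless of how a
# font renders them (detached in reformed style, fused in exhaustive style).
for ch in ("\u0d41", "\u0d42"):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")

# A syllable such as പു (pu) is simply consonant + vowel sign:
pu = "\u0d2a\u0d41"
print([unicodedata.name(c) for c in pu])
```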
<h3>Early texts on Malayalam script and orthography</h3>
<p>Modern textbooks do not detail the ‘ു’ [u] and ‘ൂ’ [uː] vowel sign forms. The earliest available reference to the Malayalam script and its usage is a book from 1772, <a href="https://en.wikipedia.org/wiki/Alphabetum_grandonico-malabaricum_sive_samscrudonicum"><i>Alphabetum grandonico-malabaricum sive samscrudonicum</i></a>. It was a textbook meant to be used by Western missionaries to Kerala to learn the Malayalam script, and its language of description is Latin. <a href="https://archive.org/stream/1772AlphabetumGrandonicoMalabaricum/1772_Alphabetum_Grandonico_Malabaricum#page/n0/mode/2up"><i><b>Alphabetum</b></i></a> describes various vowel sign forms, but it gives no indication of the <em><strong>hook and tail</strong></em> form. <strong>ക(ka), ര(ra), ഗ(ga), ഛ(ʧʰa), ജ(ʤa), ത(t̪a), ഭ(bʱa), ശ(ʃa), ഹ(ɦa)</strong> etc. use the <em><strong>hook and rounded tail</strong></em> form only. This being the first ever compilation of Malayalam script usage, and that too by a non-native linguist, there may be unintended omissions that I cannot be sure of.</p>
<p>The metal types used in this book were movable, and were the first of its kind to be used to print a Malayalam book. The same types were used to print the first ever complete book in Malayalam script <em>– <a href="https://archive.org/stream/Samkshepavedartham_1772/samkshepavedartham_1772">Samkshepavedartham</a>.</em></p>
<p><figure style="width: 739px;" class="wp-caption aligncenter" id="attachment_1312"><a href="http://thottingal.in/blog/wp-content/uploads/2018/01/clementu1.png"><img src="http://thottingal.in/blog/wp-content/uploads/2018/01/clementu1-1024x659.png" alt="" width="739" class="size-large wp-image-1312" height="476" /></a>Excerpt from <i><b>Alphabetum grandonico-malabaricum sive samscrudonicum</b></i> describing the usage of ‘ു’ [u] and ‘ൂ’ [uː] signs</figure>A still later work in this regard was done by Rev. George Mathan, almost a century after <i><b>Alphabetum</b></i>. He introduces the <strong>drop</strong>/<strong>double drop</strong> for ‘ു’ [u]/‘ൂ’ [uː] as the common sign form, and all other shapes are indicated as exceptions. He clearly mentions the two alternate forms, <strong><em>hook and tail</em></strong> as well as <em><strong>hook and rounded tail</strong></em>, in his book on the <em><strong><a href="https://archive.org/stream/Malayanmayude_vyakaranam_1863#page/n3/mode/2up">Grammar of Malayalam</a>.</strong></em><img src="http://thottingal.in/blog/wp-content/uploads/2018/01/mathanSymbols-1024x638.png" alt="" width="739" class="wp-image-1298 size-large" height="460" /><em><strong>Grammar of Malayalam- George Mathan</strong></em></p>
<figure style="width: 739px;" class="wp-caption alignright" id="attachment_1299"><img src="http://thottingal.in/blog/wp-content/uploads/2018/01/mathanSymbolsex-1024x632.png" alt="" width="739" class="size-large wp-image-1299" height="456" /><em><strong>Grammar of Malayalam- George Mathan</strong></em></figure>
<h3>Contemporary usage of orthographic styles</h3>
<p>The early attempts to describe the script of Malayalam in all its complexity are seen in these books from the initial days of the printing era. Much later, in 1971, a reformed script orthography was introduced, aimed at overcoming the technological limitations of Malayalam typewriters. But language users never abandoned the then existing style variants, and today we see a mix of all these styles around us.</p>
<p>Note: This is a translation of an earlier blog post written in <a href="https://thottingal.in/blog/2018/01/12/u-vowelsigns-in-malayalam/">Malayalam</a>.</p>2018-04-15T16:03:40+00:00Kavya ManoharKartik Mistry: 3 Filmshttps://kartikm.wordpress.com/2018/04/14/3-movies/
<p>* <a href="http://www.imdb.com/title/tt5027774/" target="_blank" rel="noopener">Three Billboards Outside Ebbing, Missouri</a> (2018)</p>
<p>* <a href="http://www.imdb.com/title/tt0974015/" target="_blank" rel="noopener">Justice League</a> (2017)</p>
<p>* <a href="http://www.imdb.com/title/tt1856101/" target="_blank" rel="noopener">Blade Runner 2049</a> (2017)</p>
<p>These are the films I watched in the last 3 months. Besides these, I found the Gujarati film <em>Passport</em> on YouTube two days ago; on the whole it was okay. Very few films leave you with the tingle that Blade Runner and Three Billboards did. Justice League also turned out better than expected. The plan now is probably to go watch Avengers on April 27. After watching the trailer of <em>Reva</em>, it looks like that film will be skipped. Ratanpur is still pending, and it looks good from its trailer, so I will have to arrange to watch it somehow.</p>2018-04-14T06:55:37+00:00કાર્તિકKushal Das: Latest attempt to censor Internet and curb press freedom in Indiahttps://kushaldas.in/posts/latest-attempt-to-censor-internet-and-curb-press-freedom-in-india.html
<p><img src="https://kushaldas.in/images/censor.jpg" alt="" /></p>
<p>A branch of the Indian government, the Ministry of Information and
Broadcasting, is trying once again to censor the Internet and curb freedom of speech.
This time, it <a href="https://www.medianama.com/2018/04/223-ministry-of-information-broadcasting-attempts-to-regulate-online-media/">ordered the formation of a
committee</a>
of 10 members who will frame regulations for online media/news portals and
online content.</p>
<p>The order includes the following <em>Terms of Reference</em> for the committee.</p>
<ul>
<li>To delineate the sphere of online information dissemination which needs to be brought under regulation, on the lines applicable to print and electronic media.</li>
<li>To recommend appropriate policy formulation for online media / news portals and online content platforms including digital broadcasting which encompasses entertainment / infotainment and news/media aggregators keeping in mind the extant FDI norms, Programme &amp; Advertising Code for TV Channels, norms circulated by PCI, code of ethics framed by NBA and norms prescribed by IBF; and</li>
<li>To analyze the international scenario on such existing regulatory mechanisms with a view to incorporate the best practices.</li>
</ul>
<h3>What are the immediate problems posed by this order?</h3>
<p>If one reads carefully, one can see how vague the terms are, and specifically
how the term <em>online content</em> was added to them.</p>
<p><em>Online content</em> means everything we can see, read, or listen to over
cyberspace. In the last few years, a number of new news organizations have come
up in India, whose fearless reporting has caused a lot of problems for the
government and its friends. Even though the government managed to censor (or
induce self-censorship of) news in the mainstream Indian media, these new
online media houses, along with individual bloggers, security researchers, and
activists, kept informing the masses about the wrongdoings of the people in
power.</p>
<p>With this latest attempt to restrict free speech over the Internet, the
government is trying to extend its reach even further. Broad terms like <em>online
content platforms</em>, <em>online media</em>, or <em>news/media aggregators</em>
will bring every person and website under its watch. One of the impacts of
indiscriminate mass surveillance like this is that people are shamed into
reading and thinking only what is in line with the government, or with popular
thought.</p>
<p>How do you determine whether a blog post or an update on a social media
platform is news? To me, most of the things I read on the Internet are news. I
learn, and I communicate my thoughts, over these various platforms in
cyberspace. To all the computer people reading this blog post: think about the
moment when you search for “how to do X in Y programming language?” on the
Internet, but cannot see the result because it is blocked by this
censorship.</p>
<p>India is also known for <a href="https://en.wikipedia.org/wiki/Internet_censorship_in_India">random
blockades</a> of
different sites over the years. The government has also ordered Internet
shutdowns for entire states lasting many days. For the majority of these
blockages, we, the citizens of India, were neither informed of the reasons nor
given a chance to question the legality of the bans. India was marked as a
<em>country under surveillance</em> by <a href="http://march12.rsf.org/i/Report_EnemiesoftheInternet_2012.pdf">Reporters Without
Borders</a> back in
2012.</p>
<p>Also remember that this is the same government which fought its hardest in
the Supreme Court of India last year to curb the privacy of every Indian
citizen. It argued that Indian citizens do not have any right to privacy.
Thankfully the bench
<a href="https://www.eff.org/deeplinks/2017/08/indias-supreme-court-upholds-right-privacy-fundamental-right-and-its-about-time">declared</a>
the following:</p>
<blockquote>
<p>The right to privacy is protected as an intrinsic part of the right to life
and personal liberty under Article 21 and as a part of the freedoms guaranteed
by Part III of the Constitution.</p>
</blockquote>
<p>Privacy is a fundamental right of every Indian citizen.</p>
<p>However, that fundamental right is still under attack in the name of another
draconian law, <em>the Aadhaar Act</em>. A case is currently before the Supreme
Court of India to determine the constitutional validity of Aadhaar. In the
recent past, when journalists reported how Aadhaar data can be breached,
instead of fixing the problems, the government started <a href="https://freedom.press/news/indian-government-faced-massive-data-breach-targets-journalists/">criminally investigating the
journalists</a>.</p>
<h3>A Declaration of the Independence of Cyberspace</h3>
<p>Governments across the world have kept trying (and will keep trying, again
and again) to curb free speech and press freedom. They are trying to draw
borders and boundaries inside cyberspace, and to restrict the open, borderless
nature of cyberspace itself.</p>
<p>In 1996, the late John Perry Barlow wrote <a href="https://www.eff.org/cyberspace-independence">A Declaration of the Independence of
Cyberspace</a>, and I think it fits
naturally into the current discussion.</p>
<blockquote>
<p>Governments of the Industrial World, you weary giants of flesh and steel, I
come from Cyberspace, the new home of Mind. On behalf of the future, I ask you
of the past to leave us alone. You are not welcome among us. You have no
sovereignty where we gather. -- John Perry Barlow</p>
</blockquote>
<h3>How can you help to fight back censorship?</h3>
<p>Each and every one of us is affected by this, and we all can help fight back
and resist censorship. The simplest thing you can do is start talking about the
problems. Discuss them with your neighbor; talk about them while commuting to
the office. Explain the problem to your children or to your parents. Write
about it: write blog posts, and share them across all the different social
media platforms. Many of your friends (from fields other than computer
technology) may be using the Internet daily, but might not know about the
destruction these laws can cause and the censorship being imposed on the
citizens of India.</p>
<p>Educate people, and learn from others about the problems as they arise. If
you are giving a talk about a FOSS technology, also talk about how a free and
open Internet helps all of us stay connected. If that freedom goes away, we
will lose everything. At any programming workshop you attend, share this
knowledge with the other participants.</p>
<p>In many cases, using tools to bypass censorship altogether is also very
helpful (avoiding any direct confrontation). <a href="https://www.torproject.org/">The Tor
Project</a> provides free software and an open
network that help protect users’ freedom and privacy by circumventing
surveillance and censorship, and it can be used for daily Internet browsing.
The resulting increase in Tor traffic helps all Tor network users together, as
it makes any attempt at tracking individuals even more expensive for
nation-state actors. So, <a href="https://www.torproject.org/projects/torbrowser.html.en">download the Tor
Browser</a> today and
start using it for everything.</p>
<p>In this era of <em>public-private partnership from hell</em>, Cory Doctorow
beautifully <a href="https://youtu.be/Oaci9vlg_Sc?t=14509">explained</a> how the
Internet is the nervous system of the 21st century, and how we all can join
together to save its freedom. Listen to him, and do your part.</p>
<p>Header image copyright: <a href="https://www.flickr.com/photos/23505652@N03/6721289089/">Peter Massas</a> (CC-BY-SA)</p>2018-04-13T04:01:00+00:00Kushal DasKushal Das: dgplug summer training 2018https://kushaldas.in/posts/dgplug-summer-training-2018.html
<p><img src="https://kushaldas.in/images/dgplug_screenshot.png" alt="" /></p>
<p><a href="https://dgplug.org/summertraining18/">dgplug summer training 2018</a> will start
at 13:30 UTC, 17th June. This will be the 11th edition. Like every year, we
have modified the training based on the feedback and, of course, there will be
more experiments to try and make it better.</p>
<h3>What happened differently in 2017?</h3>
<p>We did not manage to host all the guest sessions we had announced, but we
moved the guest sessions to a later stage of the training. This ensured that
only the really interested people were attending, so there was a better chance
of having an actual conversation during the sessions. As we received mostly
positive feedback on this, we are going to do the same this year.</p>
<p>We had many more discussions among the participants than in previous years.
<a href="https://anweshadas.in">Anwesha</a> and I wrote an
<a href="https://kushaldas.in/pages/hacker-ethic-and-free-software-movement.html">article</a>
about the history of Free Software, and during the training we had a lot of
discussion about political motivation and freedom in general.</p>
<p>We also had an amazing, detailed session on Aadhaar and how it is affecting
(read: destroying) India, by <a href="https://twitter.com/jackerhack">Kiran
Jonnalagadda</a>.</p>
<p>Besides that, we started writing <a href="https://lym.readthedocs.io/en/latest/">a new book</a>
to introduce the participants to the Linux command line. We tried to cover the
basics of the command line and the tools we use on a day-to-day basis.</p>
<p><a href="http://www.shakthimaan.com/">Shakthi Kannan</a> started <a href="https://github.com/dgplug/operation-blue-moon">Operation Blue
Moon</a>, where he helps
individuals get things done by managing their own sprints. All information
on this project can be found at the aforementioned GitHub link.</p>
<h3>What are the new plans in 2018?</h3>
<p>We are living in an era of surveillance, and the people in power are trying
to hide facts from the people being governed. There are a number of Free
Software projects which are helping the citizens of cyberspace to resist and
bypass these blockades. This year we will focus on these applications, and on
how one can start contributing to these projects upstream. A special focus will
be given to <a href="https://www.torproject.org">The Tor project</a>, from both
users’ and developers’ points of view.</p>
<p>In 2017, a lot of people asked for help getting started with Go. So, this
year we will include a basic introduction to Go in the training, though Python
will remain the primary language of instruction.</p>
<h3>How to join the training?</h3>
<p>First, join our <a href="http://lists.dgplug.org/listinfo.cgi/users-dgplug.org">mailing list</a>, and then join the IRC channel #dgplug on
Freenode.</p>2018-04-12T12:19:00+00:00Kushal DasNirbheek Chauhan: A simple method of measuring audio latencyhttp://blog.nirbheek.in/2018/04/a-simple-method-of-measuring-audio.html
<div style="text-align: left;" dir="ltr">In my <a href="http://blog.nirbheek.in/2018/03/low-latency-audio-on-windows-with.html">previous blog post</a>, I talked about how I improved the latency of GStreamer's default audio capture and render elements on Windows.<br /><br />An important part of any such work is a way to accurately measure the latencies in your audio path.<br /><br />Ideally, one would use a mechanism that can track your buffers and give you a detailed breakdown of how much latency each component of your system adds. For instance, with an audio pipeline like this:<br /><br />audio-capture → filter1 → filter2 → filter3 → audio-output<br /><br />If you use GStreamer, you can use the <a href="https://gstreamer.freedesktop.org/documentation/design/tracing.html#print-processing-latencies">latency tracer</a> to measure how much latency filter1 adds, filter2 adds, and so on.<br /><br />However, sometimes you need to measure latencies added by components <i>outside</i> of your control, for instance the audio APIs provided by the operating system, the audio drivers, or even the hardware itself. In that case it's really difficult, bordering on impossible, to do an automated breakdown.<br /><br />But we do need some way of measuring those latencies, and I needed that for the aforementioned work. Maybe we can get an aggregated (total) number?<br /><br />There's a simple way to do that if we can create a loopback connection in the audio setup. What's a <i>loopback</i> you ask?<br /><br /><div style="text-align: center;"><img src="https://upload.wikimedia.org/wikipedia/commons/c/c8/Ouroboros-simple.svg" alt="Ouroboros snake biting its tail" border="0" width="40%" /></div><br />Essentially, if we can redirect the audio output back to the audio input, that's called a loopback. 
The simplest way to do this is to connect the speaker-out/line-out to the microphone-in/line-in with a two-sided 3.5mm jack.<br /><br /><div style="clear: both; text-align: center;" class="separator"><a style="margin-left: 1em; margin-right: 1em;" href="http://4.bp.blogspot.com/-qn_LfimYsiY/Wrn_akYmgGI/AAAAAAAACGk/H4Tm1KfMGtE6-9Fhwk3k0W6plIMAqF65QCK4BGAYYCw/s1600/photo_2018-03-27_13-52-08.jpg"><img src="https://4.bp.blogspot.com/-qn_LfimYsiY/Wrn_akYmgGI/AAAAAAAACGk/H4Tm1KfMGtE6-9Fhwk3k0W6plIMAqF65QCK4BGAYYCw/s400/photo_2018-03-27_13-52-08.jpg" alt="photo of male-to-male 3.5mm jack connecting speaker-out to mic-in" border="0" width="400" height="300" /></a></div><br />Now, when we send an audio wave down to the audio output, it'll show up on the audio input.<br /><br />Hmm, what if we store the <a href="https://developer.gnome.org/glib/stable/glib-Date-and-Time-Functions.html#g-get-monotonic-time">current time</a> when we send the wave out, and compare it with the current time when we get it back? Well, that's the total end-to-end latency!<br /><br />If we send out a wave periodically, we can measure the latency continuously, even as things are switched around or the pipeline is dynamically reconfigured.<br /><br />Some of you may notice that this is somewhat similar to how the `ping` command measures latencies across the Internet.<br /><br /><div style="clear: both; text-align: center;" class="separator"><a href="http://1.bp.blogspot.com/-pTpGe6rVoIs/WrZBZhJKgOI/AAAAAAAACGA/gMvhESNqozAD4DJzXMBD8eeTcG0FfGqywCK4BGAYYCw/s1600/ping.png"><img src="https://1.bp.blogspot.com/-pTpGe6rVoIs/WrZBZhJKgOI/AAAAAAAACGA/gMvhESNqozAD4DJzXMBD8eeTcG0FfGqywCK4BGAYYCw/s1600/ping.png" alt="screenshot of ping to 192.168.1.1" border="0" /></a></div><br /><br />Just like a network connection, the loopback connection can be lossy or noisy, f.ex. if you use loudspeakers and a microphone instead of a wire, or if you have (ugh) noise in your line. 
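The timestamp-and-compare idea above can be sketched in a few lines of plain Python. This is an illustrative simulation only, not the author's actual implementation (that is the audiolatency GStreamer element): the loopback path is faked with a list shift, and the sample rate and impulse-detection threshold are assumed values.

```python
SAMPLE_RATE = 48000  # Hz; a common rate, assumed here for illustration

def make_impulse(total_samples, send_index):
    """A buffer of silence with one full-scale 'wave' (impulse) at send_index."""
    buf = [0.0] * total_samples
    buf[send_index] = 1.0
    return buf

def fake_loopback(buf, delay_samples):
    """Stand-in for the physical loopback cable: the output reappears
    at the input shifted by the path's total latency."""
    return [0.0] * delay_samples + buf[:len(buf) - delay_samples]

def end_to_end_latency_ms(sent, received, threshold=0.5):
    """Compare when the impulse left with when it came back."""
    t_sent = next(i for i, s in enumerate(sent) if abs(s) >= threshold)
    t_recv = next(i for i, s in enumerate(received) if abs(s) >= threshold)
    return (t_recv - t_sent) * 1000.0 / SAMPLE_RATE

sent = make_impulse(SAMPLE_RATE, 0)   # one impulse per one-second buffer
received = fake_loopback(sent, 64)    # pretend the path delays by 64 samples
latency_ms = end_to_end_latency_ms(sent, received)  # 64 / 48000 s ≈ 1.33 ms
```

In a real measurement the impulse would be written to the sound card and detected on the capture stream, but the arithmetic is the same.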
But unlike network packets, we lose all context once the waves leave our pipeline and we have no way of uniquely identifying each wave.<br /><br />So the simplest reliable implementation is to have only one wave traveling down the pipeline at a time. If we send a wave out, say, once a second, we can wait about one second for it to show up, and otherwise presume that it was lost.<br /><br />That's exactly how the <a href="https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad/html/gst-plugins-bad-plugins-audiolatency.html">audiolatency GStreamer plugin</a> that I wrote works! Here you can see its output while measuring the combined latency of the <a href="http://blog.nirbheek.in/2018/03/low-latency-audio-on-windows-with.html">WASAPI source and sink elements</a>:<br /><br /><div style="clear: both; text-align: center;" class="separator"><a style="margin-left: 1em; margin-right: 1em;" href="http://1.bp.blogspot.com/-9j3Hs5bzz_M/WsPOvM-bkKI/AAAAAAAACHA/go0r1u8AX-847lId2whNuiwPsyYrZwYFgCK4BGAYYCw/s1600/wasapi-latency.png"><img src="https://1.bp.blogspot.com/-9j3Hs5bzz_M/WsPOvM-bkKI/AAAAAAAACHA/go0r1u8AX-847lId2whNuiwPsyYrZwYFgCK4BGAYYCw/s1600/wasapi-latency.png" border="0" /></a></div><br />The first measurement will always be wrong because of various implementation details in the audio stack, but the next measurements should all be correct.<br /><br />This mechanism does place an upper bound on the latency that we can measure, and on how often we can measure it, but it should be possible to take more frequent measurements by sending a new wave as soon as the previous one was received (with a 1 second timeout). 
So this is an enhancement that can be done if people need this feature.<br /><br />Hope you find the element useful; <a href="https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad/html/gst-plugins-bad-plugins-audiolatency.html#gst-plugins-bad-plugins-audiolatency.description">go forth and measure</a>!</div>2018-04-11T08:43:04+00:00NirbheekKushal Das: Remembering John Perry Barlowhttps://kushaldas.in/posts/remembering-john-perry-barlow.html
<p><img src="https://kushaldas.in/images/jpb/barlow4-og.png" alt="" /></p>
<blockquote>
<p>I dream of a day, and it is not a crazy dream, when everybody on this planet
who wants to know all that is presently known about something, will be able to
do so regardless of where he or she is. And I dream of a day where the right to
know is understood as a natural human right, one that extends to every being on
the planet who is governed by anything: the right to know what its government
is doing, and how, and why. -- John Perry Barlow</p>
</blockquote>
<p><img src="https://kushaldas.in/images/jpb/jpb_pycon.jpg" alt="" /></p>
<p>I met <a href="https://en.wikipedia.org/wiki/John_Perry_Barlow">John Perry Barlow</a> only
once in my life, during his PyCon US 2014 keynote. I remember trying my best
to stay calm as I walked towards him to start a conversation. After some time,
he went up on the stage and started speaking. Even though I spoke with him
very briefly, I felt like I had known him for a long time.</p>
<p>This Saturday, April 7th, <a href="https://eff.org">Electronic Frontier Foundation</a> and
<a href="https://freedom.press">Freedom of the Press Foundation</a> organized the <a href="https://supporters.eff.org/civicrm/event/info?reset=1&amp;id=191">John
Perry Barlow
Symposium</a> at the
<a href="https://archive.org">Internet Archive</a> to celebrate the life and leadership of
John Perry Barlow, or JPB as he was known to many of his friends and followers.</p>
<p>The event started around 2:30 AM IST, and Anwesha and /me woke up at the
right time to attend the whole event. Farhaan and Saptak also took part in
watching the event live.</p>
<p>Cory Doctorow was set to open the event but was late due to the closure of
SFO runways (he later mentioned that he was stuck for more than 5 hours). In
his stead, Cindy Cohn, Executive Director of the Electronic Frontier
Foundation, started the event. There were two main panel sessions, with 4
speakers each; everyone spoke about how Barlow inspired them, or about Internet
freedom, and took questions afterwards. But before those sessions began, Ana
Barlow spoke about her dad, about how many people from different geographies
were connected to JPB, and about how he touched so many people’s lives.</p>
<p><img src="https://kushaldas.in/images/jpb/first_panel.png" alt="" /></p>
<p>The first panel had Mitch Kapor, Pam Samuelson, and Trevor Timm on the
stage. Mitch started with JPB’s writings from the 1990s and how he saw the
future of the Internet. He also reminded us that most of the stories JPB told
us were literally true :D. He recalled how, even though EFF started as a civil
liberties organization, the Wall Street Journal characterized it as a <em>hacker
defense fund</em>. Pam Samuelson spoke next, starting with a quote from JPB.
Pam mentioned <a href="https://www.wired.com/1994/03/economy-ideas/">The Economy of Ideas</a>,
published in Wired magazine in 1994, as Barlow’s best contribution on
copyright.</p>
<p><img src="https://kushaldas.in/images/jpb/cory.png" alt="" /></p>
<p>Cory Doctorow came up on stage to introduce the next speaker, Trevor Timm,
the executive director of the Freedom of the Press Foundation (FPF). He
particularly mentioned the <a href="https://securedrop.org">SecureDrop</a>
project and its importance. I want to emphasize one quote from him.</p>
<blockquote>
<p>It’s been observed that many people around the world, billions of people,
struggle under bad code written by callow Silicon Valley dude-bros, those who
hack up a few lines of code and then subject billions of people to its
outcomes without any consideration of ethics.</p>
</blockquote>
<p><img src="https://kushaldas.in/images/jpb/fpf_start.png" alt="" />
<img src="https://kushaldas.in/images/jpb/trevor.png" alt="" /></p>
<p>Trevor talked about the initial days of the Freedom of the Press Foundation,
and how JPB was the organizational powerhouse behind it. On the day FPF was
launched, JPB and Daniel Ellsberg wrote <a href="https://www.huffingtonpost.com/daniel-ellsberg/wikileaks-funding_b_2313376.html">an
article</a>
for the Huffington Post, titled <strong>Crowd Funding the Right to Know</strong>.</p>
<blockquote>
<p>When a government becomes invisible, it becomes unaccountable. To expose its
lies, errors, and illegal acts is not treason, it is a moral responsibility.
Leaks become the lifeblood of the Republic.</p>
</blockquote>
<p>A few months after the above article was published, one government employee
was moved by its words, and contacted the FPF board members (through
<a href="https://twitter.com/micahflee">Micah Lee</a>). Later, when his name
became public, Barlow posted the following tweet.</p>
<p><img src="https://kushaldas.in/images/jpb/jpb_snowden.png" alt="" /></p>
<p>Next, Edward Snowden himself came in as the 4th speaker on the panel. He
told a story that has not been publicized much. He went back to his days in the
NSA where, even though he was a high school dropout, he had a high salary and a
very comfortable life. As he gained access to highly classified information, he
realized that something was not right.</p>
<blockquote>
<p>I realized what was legal was not necessarily what was moral. I realized
what was being made public was not the same as what was true. -- Edward
Snowden</p>
</blockquote>
<p><img src="https://kushaldas.in/images/jpb/ed.png" alt="" /></p>
<p>He talked about how the work of EFF and JPB gave direction to many decisions
in his life. Snowden read Barlow’s <a href="https://www.eff.org/cyberspace-independence">A Declaration of the Independence of
Cyberspace</a>, and perhaps that was
the first seed of radicalization in his life. How Barlow chose people over
living a very happy and easy life shows his allegiance to us, the common
people of the world.</p>
<p>After the first panel of speakers, Cory again took the stage to talk about
privacy and the Internet. He spoke about why building technology that is safe
for the world is important at this point in history.</p>
<p>After a break of a few minutes, the next panel of speakers came up on the
stage: Shari Steele, John Gilmore, Steven Levy, and Joi Ito.</p>
<p><img src="https://kushaldas.in/images/jpb/second_panel.png" alt="" /></p>
<p>Shari was the first speaker in this group. Talking about her initial days at
EFF, she mentioned how, even without knowing about JPB beforehand, a single
meeting turned her into a groupie. She described the first big legal fight of
EFF, and how JPB wrote <em>A Declaration of the Independence of
Cyberspace</em> during that time. She chose a quote from it:</p>
<blockquote>
<p>We are creating a world where anyone, anywhere may express his or her beliefs,
no matter how singular, without fear of being coerced into silence or
conformity.</p>
</blockquote>
<p>Later, John Gilmore pointed out a few quotes from JPB on LSD and on how
American society tries to control everything. John explained why he thinks
Barlow’s ideas about psychedelic drugs and their effects on the human brain
were correct. He mentioned how JPB cautioned us to distinguish between data,
information, and experience, in ways that are often forgotten today.</p>
<p>Next, Steven Levy skipped through many different stories, choosing to focus
on how amazingly Barlow decided to express his ideas. The many articles JPB
wrote helped transform the view of the web in our minds. Steven chose a quote
from JPB’s biography (which will be published in June) to share with us:</p>
<blockquote>
<p>If people code out for eight minutes like I did and then come back, they
usually do so as a different person than the one who left. But I guess my brain
doesn’t use all that much oxygen because I appeared to be the same guy, at
least from the inside. For eight minutes, however, I had not just been
gratefully dead, I had been plain, flat out, ordinary dead. It was then I
decided the time had finally come for me to begin working on my book. Looking
for a ghost writer was not really the issue. At the time, my main concern was
to not be a ghost before the book itself was done.</p>
</blockquote>
<p>I think Steven Levy chose the right words to describe Barlow in the last
sentence of his talk:</p>
<blockquote>
<p>Reading that book makes me think of how much we are going to miss Barlow’s
voice in this scary time for tech, when our consensual hallucination is looking
more and more like a bad trip.</p>
<p>When you talk to the Dalai Lama, just like when you talk to John Perry Barlow,
there is a deep sense of humor that comes from knowing how f***** up the world
is, how unjust the world is, how terrible it is, but still being so connected
to true nature, that it is so funny. -- Joi Ito</p>
</blockquote>
<p>Joi mentioned that Barlow not only gave us a direction by writing the
Declaration of the Independence of Cyberspace, but also created different
organizations to make sure that we start moving in that direction.</p>
<p>Amelia Barlow was the last speaker of the day. She went through the 25
Principles of Adult Behavior.</p>
<p>The day ended with a marching order from Cory Doctorow. He asked everyone to
talk more about the Internet and technology, and how they are affecting our
lives. It is false hope to assume that everyone can understand the problems on
their own. Most people still don’t think much about freedom, or about how the
people in power control our lives using the same technologies we find amazing.
Talking to more people and helping them understand the problem is a good start
down the path to a better future. And John Perry Barlow showed us how to walk
that path, with his extraordinary life and his willingness to create special
bonds with everyone around him.</p>
<p>I want to specially thank the <a href="https://archive.org">Internet Archive</a> for
hosting the event and allowing people like us, in cyberspace, to actually get
the feeling of being in the room with everyone else.</p>
<p><a href="https://www.youtube.com/watch?v=Oaci9vlg_Sc">Recording of the event</a><br />
Header image copyright: EFF</p>2018-04-10T04:31:00+00:00Kushal DasNirbheek Chauhan: Latency in Digital Audiohttp://blog.nirbheek.in/2018/03/latency-in-digital-audio.html
<div style="text-align: left;" dir="ltr">We've come a long way since <a href="https://en.wikipedia.org/wiki/Invention_of_the_telephone" target="_blank">Alexander Graham Bell</a>, and everything's turned digital.<br /><br />Compared to analog audio, <a href="https://en.wikipedia.org/wiki/Digital_signal_processing" target="_blank">digital audio processing </a>is extremely versatile, is much easier to design and implement than analog processing, and also adds effectively zero noise along the way. With rising computing power and dropping costs, every operating system has had drivers, engines, and libraries to record, process, playback, transmit, and store audio for over 20 years.<br /><br /><div style="text-align: left;">Today we'll talk about some of the differences between analog and digital audio, and how the widespread use of digital audio adds a new challenge: <i>latency</i>.</div><br /><h2 style="text-align: left;">Analog vs Digital</h2><div style="text-align: left;"><br /></div><div style="text-align: left;"><b>Analog data</b> flows like water through an empty pipe. You open the tap, and the time it takes for the first drop of water to reach you is the latency. When analog audio is transmitted through, say, an <a href="https://en.wikipedia.org/wiki/RCA_connector" target="_blank">RCA cable</a>, the transmission happens at the speed of electricity and your latency is:<code></code><br /><br /><div style="text-align: center;"><img src="https://nirbheek.in/files/blog/analog-latency.svg" alt="wire length/speed of electricity" /></div><br />This number is ridiculously small<span class="st">—</span>especially when compared to the speed of sound. An electrical signal takes 0.001 milliseconds to travel 300 metres (984 feet). Sound takes 874 milliseconds (almost a second).<br /><br />All analog effects and filters obey similar equations. 
If you're using, say, an analog pedal with an electric guitar, the signal is transformed continuously by an electrical circuit, so the latency is a function of the wire length (plus capacitors/transistors/etc), and is almost always negligible.<br /><br /><b>Digital audio</b> is transmitted in "packets" (buffers) of a particular size, like a <a href="https://en.wikipedia.org/wiki/Bucket_brigade" target="_blank">bucket brigade</a>, but at the speed of electricity. Since the real world is analog, this means to record audio, you must use an <a href="https://en.wikipedia.org/wiki/Analog-to-digital_converter" target="_blank">Analog-Digital Converter</a>. The <abbr title="Analog-Digital Converter">ADC</abbr> <a href="https://en.wikipedia.org/wiki/Quantization_(signal_processing)" target="_blank">quantizes</a> <a href="https://wiki.xiph.org/Videos/A_Digital_Media_Primer_For_Geeks" target="_blank">the signal</a> into digital measurements (samples), packs multiple samples into a buffer, and sends it forward. This means your latency is now: </div><br /><div style="text-align: center;"><img src="https://nirbheek.in/files/blog/digital-latency.svg" alt="(wire length/speed of electricity) + buffer size" /></div><div style="text-align: left;"><br />We saw above that the first part is insignificant; what about the second part?<br /><br />Latency is measured in time, but buffer size is measured in bytes. For <a href="https://en.wikipedia.org/wiki/Audio_bit_depth" target="_blank">16-bit integer audio</a>, each measurement (sample) is stored as a 16-bit integer, which is 2 bytes. That's the theoretical lower limit on the buffer size. The <a href="https://en.wikipedia.org/wiki/Sampling_(signal_processing)#Sampling_rate" target="_blank">sample rate</a> defines how often measurements are made, and these days, is usually 48KHz. This means each sample contains ~0.021ms of audio. 
To go lower, we need to increase the sample rate to 96KHz or 192KHz.<br /><br />However, when general-purpose computers are involved, the buffer size is almost never lower than 32 bytes, and is usually 128 bytes or larger. For <a href="https://en.wikipedia.org/wiki/Multichannel_audio">single-channel</a> 16-bit integer audio at 48KHz, a 32 byte buffer is 0.33ms, and a 128 byte buffer is 1.33ms. This is our buffer size and hence the base latency while recording (or playing) digital audio.<br /><br />Digital effects operate on individual buffers, and will add latency depending on how much CPU processing the effect requires. Such effects may also add latency if the algorithm used requires it, but that's the same with analog effects.<br /><br /><h2 style="text-align: left;">The Digital Age</h2><br />So everyone's using digital. But isn't 1.33ms a lot of additional latency?<br /><br />It might seem that way till you think about it in real-world terms. Sound travels less than half a meter (1<span class="st">½</span> feet) in that time, and that sort of delay is completely unnoticeable by humans<span class="st">—</span>otherwise we'd notice people's lips moving before we heard their words.<br /><br />In fact, 1.33ms is too small for the majority of audio applications!<br /><br />To process such small buffer sizes, you'd have to wake the CPU up <abbr title="1000 / 1.33">750 times a second</abbr>, just for audio. This is highly inefficient and wastes a lot of power. You really don't want that on your phone or your laptop, and it's completely unnecessary in most cases anyway. <br /><br />For instance, your music player will usually use a buffer size of ~200ms, which is just <i>5</i> CPU wakeups per second. Note that this doesn't mean that you will hear sound 200ms after hitting "play". 
The audio player will just send 200ms of audio to the sound card at once, and playback will begin immediately.<br /><br />Of course, you can't do that with live playback such as video calls<span class="st">—</span>you can't "read-ahead" data you don't have. You'd have to invent a time machine first. As a result, apps that use real-time communication have to use smaller buffer sizes because that directly affects the latency of live playback.<br /><br />That brings us back to efficiency. These apps also need to conserve power, and 1.33ms buffers are really wasteful. Most consumer apps that require low latency use 10-15ms buffers, and that's good enough for things like voice/video calling, video games, notification sounds, and so on.<br /><br /><h2 style="text-align: left;">Ultra Low Latency</h2><br />There's one category left: musicians, sound engineers, and other folk that work in the pro-audio business. For them, 10ms of latency is much too high!<br /><br />You usually can't notice a 10ms delay between an event and the sound for it, but when making music, you <i>can</i> hear it when two instruments are out-of-sync by 10ms or if the sound for an instrument you're playing is delayed. Instruments such as the snare drum are more susceptible to this problem than others, which is why the <a href="https://en.wikipedia.org/wiki/Stage_monitor_system" target="_blank">stage monitors</a> used in live concerts must not add any latency.<br /><br />The standard in the music business is to use buffers that are 5ms or lower, down to the 0.33ms number that we talked about above.<br /><br />Power consumption is absolutely no concern, and the real problems are the accumulation of small amounts of latency everywhere in your stack, and ensuring that you're able to read buffers from the hardware or write buffers to the hardware fast enough.<br /><br />Let's say you're using an app on your computer to apply digital effects to a guitar that you're playing. 
This involves capturing audio from the line-in port, sending it to the application for processing, and playing it from the sound card to your amp.<br /><br />The latencies while capturing and outputting audio are both multiples of the buffer size, so they add up very quickly. The effects app itself will also add a variable amount of latency, and at 1.33ms buffer sizes you will find yourself quickly approaching a 10ms latency from line-in to amp-out. The only way to lower this is to use a smaller buffer size, which is precisely what pro-audio hardware and software enable.<br /><br />The second problem is that of CPU scheduling. You need to ensure that the threads that are fetching/sending audio data to the hardware and processing the audio have the highest priority, so that nothing else will steal CPU-time away from them and cause glitching due to buffers arriving late.<br /><br />This gets harder as you lower the buffer size because the audio stack has to do more work for each bit of audio. The fact that we're doing this on a general-purpose operating system makes it even harder, and requires implementing <a href="https://en.wikipedia.org/wiki/Real-time_computing" target="_blank">real-time scheduling</a> features across several layers. But that's a story for another time!<br /><br />I hope you found this dive into digital audio interesting! My next post <span style="text-decoration: line-through;">will be</span> is about my journey in <a href="http://blog.nirbheek.in/2018/03/low-latency-audio-on-windows-with.html">implementing ultra low latency capture and render on Windows</a> in the <a href="https://msdn.microsoft.com/library/windows/desktop/dd371455.aspx" target="_blank">WASAPI</a> plugin for <a href="https://en.wikipedia.org/wiki/GStreamer" target="_blank">GStreamer</a>. This was already possible on Linux with the JACK GStreamer plugin and on macOS with the CoreAudio GStreamer plugin, so it will be interesting to see how the same problems are solved on Windows. 
Tune in!</div></div>2018-04-04T22:34:59+00:00NirbheekKartik Mistry: Strava, Part Fivehttps://kartikm.wordpress.com/2018/04/04/strava-5/
<p>The biggest trouble with the <strong>Strava</strong> app is often GPS glitches. The app depends on the phone's GPS, and the phone's GPS depends on the chip inside it. Which company's chip the phone has depends on the phone's price: cheap phone, cheap results. That doesn't mean Strava is bound to misbehave, but if it does:</p>
<p>1. Turn off the battery optimizer on your phone for the Strava app.<br />
2. If possible, turn off the mobile network after starting Strava, so that less battery is used and the GPS doesn't latch onto mobile towers.<br />
3. If the distance or elevation is wrong, it can be corrected on the Strava desktop site. Rides and runs can also be cut/cropped there.<br />
4. Use a Garmin or a GPS watch <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" style="height: 1em;" alt="🙂" class="wp-smiley" /></p>
<p>Any other suggestions? Please share them here!<br />
</p>2018-04-04T07:25:47+00:00KartikKushal Das: Using ZNC on Tor Network for Freenode and OFTChttps://kushaldas.in/posts/using-znc-on-tor-network-for-freenode-and-oftc.html
<p>The <a href="https://www.torproject.org">Tor network</a> provides a safer way to access
the Internet, without your local ISP and government recording your every step on the
Internet. We can use the same network to chat over IRC. For many FOSS
contributors and activists across the world, IRC is a very common medium for
chat. In this blog post, we will learn how to use ZNC with Tor for IRC.</p>
<h3>Introducing ZNC</h3>
<p><a href="https://wiki.znc.in/ZNC">ZNC</a> is an IRC bouncer program, which will allow your
IRC client to stay detached from the server, but still receive and log the
messages, so that when you connect a client later on, you will receive all the
messages.</p>
<p>In this tutorial, we will use znc-1.6.6 (packaged in Fedora and EPEL). I am
also going to guess that you already figured out the <a href="https://wiki.znc.in/Introduction">basic
usage</a> of ZNC.</p>
<h3>Installing the required tools</h3>
<pre><code>$ sudo dnf install znc tor torsocks
</code></pre>
<p>Tor provides a SOCKS proxy at port <code>9050</code> (the default value), but ZNC cannot
use a SOCKS proxy easily. We will use the <code>torify</code> command from the torsocks
package to route ZNC through the proxy.</p>
<h3>ZNC service over Tor network</h3>
<p>As a first step, we will expose the ZNC listener as an Onion service. Edit the
<code>/etc/tor/torrc</code> file and add the following.</p>
<pre><code>HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 8001 127.0.0.1:8001
HiddenServiceAuthorizeClient stealth hidden_service
</code></pre>
<p>After this, when we start the <code>tor</code> service, we will be able to find the
<em>.onion</em> address and the <em>HidServAuth</em> value from the
<code>/var/lib/tor/hidden_service/hostname</code> file.</p>
<pre><code># cat /var/lib/tor/hidden_service/hostname
34aaaiwlmrandom8.onion SomeO/+yOOPjvaluetext # client: hidden_service
</code></pre>
<p>Now, I will be using a user account <code>ftor</code> on the server to run ZNC. The
configuration files for ZNC are in the <code>/home/ftor/.znc</code> directory.</p>
<p>I have the following values in the <code>~/.znc/configs/znc.conf</code> file for the
listener.</p>
<pre><code>&lt;Listener listener0&gt;
AllowIRC = true
AllowWeb = true
Host = 127.0.0.1
IPv4 = true
IPv6 = false
Port = 8001
SSL = false
URIPrefix = /
&lt;/Listener&gt;
</code></pre>
<p>Here, I am making sure that the listener listens only on localhost. We
already mapped port <code>8001</code> of localhost to our Onion service. This way the
web frontend of ZNC is available only over Tor.</p>
<p>Now you can start the service. I will keep it running in the foreground with
debugging messages to make sure that things are working.</p>
<pre><code>$ torify znc --debug
</code></pre>
<h3>Connecting from a web client</h3>
<p>I am using xchat as the IRC client. I also have Tor installed on my local
computer and added the following line to the <code>/etc/tor/torrc</code> file so that my
system can find and connect to the Onion service.</p>
<pre><code>HidServAuth 34aaaiwlmrandom8.onion SomeO/+yOOPjvaluetext
</code></pre>
<p>If you just want to connect to the ZNC web frontend using the Tor Browser, then
you will have to add the same line to the <code>Browser/TorBrowser/Data/Tor/torrc</code>
file inside the Tor Browser directory.</p>
<p><img src="https://kushaldas.in/images/znc_loginscreen.png" alt="" /></p>
<h3>Connecting to OFTC network</h3>
<p>Now we will connect to the OFTC IRC network. <a href="https://www.torproject.org">The Tor
Project</a> itself has all the IRC channels on this
network. Make sure that you have a registered IRC nickname on this network.</p>
<p>Add the following configuration in the ZNC configuration file.</p>
<pre><code> &lt;Network oftc&gt;
Encoding = ^UTF-8
FloodBurst = 4
FloodRate = 1.00
IRCConnectEnabled = true
JoinDelay = 0
Nick = yournickname
Server = irc4.oftc.net +6697
&lt;Chan #tor&gt;
Buffer = 500
&lt;/Chan&gt;
&lt;/Network&gt;
</code></pre>
<p>Now let us start xchat with torify so that it can find our onion service.</p>
<pre><code>$ torify xchat
</code></pre>
<p>Next, we will add our new ZNC service address as a new server, remember to have
the password as <code>zncusername/networkname:password</code>. In the above case, the
network name is <em>oftc</em>.</p>
<p><img src="https://kushaldas.in/images/xchat_add_network_server.png" alt="" /></p>
<p>After adding the new server as mentioned above, you should be able to connect
to it using xchat.</p>
<h3>Connecting to Freenode network</h3>
<p>Freenode <a href="https://freenode.net/kb/answer/chat">provides an Onion service</a> to
its IRC network. This means your connection from the client (ZNC in this case)
to the server is end-to-end encrypted and stays inside the Onion network
itself. But using this will require some extra work.</p>
<h3>Creating SSL certificate for Freenode</h3>
<p>On the server, we will have to create an SSL certificate.</p>
<p><code>$ openssl req -x509 -sha256 -nodes -days 1200 -newkey rsa:4096 -out
user.pem -keyout user.pem</code></p>
<p>Remember to keep the name of the output file as <em>user.pem</em>; I had to spend a
few hours debugging thanks to a wrong filename.</p>
<p>We will have to find the fingerprint of the certificate by using the following
command.</p>
<pre><code>$ openssl x509 -sha1 -noout -fingerprint -in user.pem | sed -e 's/^.*=//;s/://g;y/ABCDEF/abcdef/'
eeeee345b4d9d123456789fa365f4b4b684b6666
</code></pre>
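<p>If you prefer not to parse openssl's output by hand, the same fingerprint (lowercase hex, no colons) can be computed with a few lines of Python. This is just a sketch; it assumes the <em>user.pem</em> file created in the previous step.</p>

```python
# Compute the SHA-1 fingerprint of a PEM certificate: lowercase hex with no
# colons, the same value printed by the openssl pipeline above.
import hashlib
import ssl

def cert_fingerprint_sha1(pem_path):
    with open(pem_path) as f:
        pem = f.read()
    der = ssl.PEM_cert_to_DER_cert(pem)  # strip PEM markers, base64-decode
    return hashlib.sha1(der).hexdigest()

# Usage (assumes the certificate created earlier):
# print(cert_fingerprint_sha1("user.pem"))
```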
<p>Now connect to Freenode normally using your regular client (xchat in my case),
and add this fingerprint to your nickname.</p>
<pre><code>/msg NickServ CERT ADD eeeee345b4d9d123456789fa365f4b4b684b6666
</code></pre>
<p>You should be able to see the details using whois.</p>
<pre><code>/whois yournick
</code></pre>
<h3>Enable SASL and Cert module in ZNC</h3>
<p>Next, we will copy the certificate file to the right location so that ZNC can
use it.</p>
<p><code>$ cp user.pem ~/.znc/users/&lt;yourzncuser&gt;/moddata/cert/user.pem</code></p>
<p>Remember to put the right ZNC username in the above command.</p>
<p>Add the following configuration for <em>freenode</em> network in the ZNC configuration
file and restart ZNC.</p>
<pre><code> &lt;Network freenode&gt;
FloodBurst = 4
FloodRate = 1.00
IRCConnectEnabled = true
JoinDelay = 0
LoadModule = simple_away
LoadModule = cert
LoadModule = sasl
Nick = yourusername
Server = freenodeok2gncmy.onion +6697
TrustedServerFingerprint = 57:2d:6f:dc:90:27:0e:17:b6:89:46:4f:6a:a4:37:6e:e9:20:e1:cd:ee:f5:42:cd:3c:5a:a8:6d:17:16:f8:71
&lt;Chan #znc&gt;
&lt;/Chan&gt;
&lt;/Network&gt;
</code></pre>
<p>Remember to update the nickname. At the end of the blog post, I will explain
more about the server fingerprint.</p>
<p>Next, go to the <code>*status</code> tab in your client, and give the following commands
to load <a href="http://wiki.znc.in/Cert">cert</a> and <a href="http://wiki.znc.in/Sasl">sasl</a>
modules.</p>
<pre><code>/query *status
loadmod cert
loadmod sasl
/msg *sasl Mechanism EXTERNAL
/query *status
Jump
</code></pre>
<p>The <code>Jump</code> command will try to reconnect to the Freenode IRC server. You should
be able to see the debug output in the server for any error.</p>
<h3>The story of the server fingerprint for Freenode</h3>
<p>Because Freenode’s SSL certificate is not an EV certificate for the <em>.onion</em>
address, ZNC will fail to connect normally. We will have to add the server
fingerprint to the configuration so that we can connect. But this step was
failing for a long time, and the excellent folks in #znc helped me to debug the
issue step by step. It seems the <a href="https://freenode.net/kb/answer/chat">fingerprint given on the Freenode
site</a> is an old one, and we need the
current fingerprint. We also have an
<a href="https://github.com/znc/znc/issues/1507">issue</a> filed on a related note.</p>
<p>Finally, you may want to run the ZNC as a background process on the server.</p>
<pre><code>$ torify znc
</code></pre>
<h3>Tools versions</h3>
<ul>
<li>ZNC 1.6.6</li>
<li>tor 0.3.2.10</li>
<li>torsocks 2.2.0</li>
</ul>
<p>If you have queries, feel free to join #znc on Freenode and #tor on OFTC
network and ask for help.</p>
<h4>Updated post</h4>
<p>I have updated the post to use the torify command. This will make running ZNC much
simpler than the tool mentioned previously.</p>2018-04-03T06:47:00+00:00Kushal DasSuchakra Sharma: So, what’s this AppSwitch thing going around?https://suchakra.wordpress.com/2018/03/31/so-whats-this-appswitch-thing-going-around/
<p><span style="font-weight: 400;">I recently read Jérôme Petazzoni’s <a href="https://jpetazzo.github.io/2018/03/13/appswitch-hyperlay-network-stack-future/" target="_blank" rel="noopener">blog post</a></span><span style="font-weight: 400;"> about a tool called <a href="http://appswitch.io" target="_blank" rel="noopener">AppSwitch</a> which made some Twitter waves on the busy interwebz. I was intrigued. It turns out that it was something that I was familiar with. When I met <a href="https://www.linkedin.com/in/subhraveti" target="_blank" rel="noopener">Dinesh</a> back in 2015 at Linux Plumbers in Seattle, he had presented me with a grand vision of how applications need to be free of any networking constraints and configurations, and how a uniform mechanism should evolve that makes such configurations transparent (I’d rather say opaque now). There are layers over layers of network related abstractions. Consider a simple network call made by a Java application. It goes through multiple layers in userspace (through the various libs, all the way to native calls from the JVM and eventually syscalls) and then multiple layers in kernel-space (syscall handlers to network subsystems and then to driver layers and over to the hardware). Virtualization adds 4x more layers. Each point in this chain does have a justifiable unique configuration point. Fair point. But from an application’s perspective, it feels like fiddling with the knobs all the time:</span></p>
<figure style="width: 526px;" class="wp-caption aligncenter" id="attachment_1480"><img src="https://suchakra.files.wordpress.com/2018/03/christmas_settings.png?w=739" alt="christmas_settings" class=" size-full wp-image-1480 aligncenter" /><span style="font-weight: 400;">Christmas Settings: <a href="https://xkcd.com/1620/" rel="nofollow">https://xkcd.com/1620/</a></span></figure>
<p><span style="font-weight: 400;">For example, we have of course grown around iptables and custom in-kernel and out of kernel load balancers and even enhanced some of them to exceptional performance (such as XDP based load balancing). But when it comes to data path processing, doing nothing at all is much better than doing something very efficiently. Apps don’t really have to care about all these myriad layers anyway. So why not add another dimension to this and let this configuration be done at the app level itself? Interesting.. </span><a href="https://emojipedia.org/thinking-face/"><img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f914.png" style="height: 1em;" alt="🤔" class="wp-smiley" /></a></p>
<p><span style="font-weight: 400;">I casually asked Dinesh to see how far the idea had progressed and he ended up giving me a single binary and told me that’s it! It seems AppSwitch had been finally baked in the oven.</span></p>
<h2><span style="font-weight: 400;">First Impressions</span></h2>
<p><span style="font-weight: 400;">So there is a single static binary named <strong><code>ax</code></strong> which runs as an executable as well as in a daemon mode. It seems AppSwitch is distributed as a docker image as well though. I don’t see any kernel module (unlike what Jerome tested). This is definitely the userspace version of the same tech. </span></p>
<p><span style="font-weight: 400;">I used the <strong><code>ax</code></strong> docker image. <strong>ax</strong> was both installed and running with one docker-run command.</span></p>
<pre><span style="font-weight: 400;"><code><strong>$ docker run -d --pid=host --net=none -v /usr/bin:/usr/bin -v /var/run/appswitch:/var/run/appswitch --privileged docker.io/appswitch/ax</strong></code> </span></pre>
<p><span style="font-weight: 400;">Based on the documentation, this little binary seems to do a lot — service discovery, load balancing, network segmentation etc. But I just tried the basic features in a single-node configuration. </span></p>
<p><span style="font-weight: 400;">Let’s run a <a href="http://www.jibble.org/miniwebserver/" target="_blank" rel="noopener">Java webserver </a></span><span style="font-weight: 400;">under <strong><code>ax</code></strong>. </span></p>
<pre><strong><code># ax run --ip 1.2.3.4 -- java -jar SimpleWebServer.jar</code></strong></pre>
<p><span style="font-weight: 400;">This starts the webserver and assigns the IP 1.2.3.4 to it. It’s like overlaying the server’s own IP configuration through ax such that all requests are then redirected through 1.2.3.4. While idling, I didn’t see any resource consumption in the ax daemon. If it was monitoring system calls with <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/chap-system_auditing" target="_blank" rel="noopener">auditd</a> or something, I’d have noticed some CPU activity. Well, the server didn’t break, and when accessed via a client run through ax, it starts serving just fine.</span></p>
<pre><strong><code># ax run <b>--ip 5.6.7.8</b> -- curl -I 1.2.3.4
HTTP/1.0 500 OK
Date: Wed Mar 28 00:19:25 PDT 2018
Server: JibbleWebServer/1.0
Content-Type: text/html
Expires: Thu, 01 Dec 1994 16:00:00 GMT
Content-Length: 58</code> </strong><span style="font-weight: 400;"><strong>Last-modified: Wed Mar 28 00:19:25 PDT 2018</strong> </span></pre>
<p><span style="font-weight: 400;">Naaaice! <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" style="height: 1em;" alt="🙂" class="wp-smiley" /> Why not try connecting with Firefox. Ok, wow, this works too! </span></p>
<p><img src="https://suchakra.files.wordpress.com/2018/03/ax-firefox1.png?w=739" alt="ax-firefox" class="alignnone size-full wp-image-1482" /></p>
<p><span style="font-weight: 400;">I tried this with a Golang HTTP server (Caddy) that is statically linked. If <strong><code>ax</code></strong> was doing something like <strong><code>LD_PRELOAD</code></strong>, that would trip it up. This time I tried passing a name rather than the IP and ran it as a regular user with the built-in <strong><code>--user</code> </strong>option</span></p>
<pre><strong><code>#<b> ax run --myserver --user suchakra -- caddy -port 80</b>
#<b> ax run --user suchakra -- curl -I myserver</b>
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 0
Content-Type: text/html; charset=utf-8
Etag: "p6f4lv0"
Last-Modified: Fri, 30 Mar 2018 19:25:07 GMT
Server: Caddy
Date: Sat, 31 Mar 2018 01:52:28 GMT</code></strong></pre>
<p><span style="font-weight: 400;">So no kernel module tricks, it seems. I guess this explains why Jerome called it “Network Stack from the future”. The future part here is applications and with predominant containerized deployments, the problems of microservices networking have really shifted near to the apps. </span></p>
<p><span style="font-weight: 400;">We need to get rid of the overhead caused by networking layers and frequent context switches happening as a single containerized app communicates with another one. AppSwitch could potentially just eliminate this altogether and the communication would actually resemble traditional socket-based IPC mechanisms with an advantage of a zero overhead read/write cost once the connection is established. I think I would want to test this out thoroughly sometime in the future if I get some time off from my bike trips <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" style="height: 1em;" alt="🙂" class="wp-smiley" /></span></p>
<h2><span style="font-weight: 400;">How does it work?</span></h2>
<p><span style="font-weight: 400;">Frankly I don’t know in-depth, but I can guess. All applications, containerized or not, are just a bunch of executables linked to libs (or built statically) running over the OS. When they need the OS’s help, they ask. To understand an application’s behavior or to morph it, the OS can help us understand what is going on and provides interfaces to modify its behavior. Auditd, for example, when configured, allows us to monitor every syscall from a given process. Programmable LSMs can be used to set per-resource policies through the kernel’s help. For performance observability, tracing tools have traditionally allowed an insight into what goes on underneath. In the world of networking, we again take the OS’s help – routing and filtering strategies are still defined through iptables with some advances happening in BPF-XDP. However, in the case of networking, calls such as <strong><code>connect()</code></strong>, <strong><code>accept()</code></strong> could be intercepted purely in userspace as well. But doing so robustly and efficiently without application or kernel changes with reasonable performance has been a hard academic problem for decades </span><b>[1][2].</b><span style="font-weight: 400;"> There must be some other smart things at work underneath in <strong><code>ax</code></strong> to keep this robust enough for all kinds of apps. With the interception problem solved, this would allow <strong><code>ax</code></strong> to create a map and actually perform the ‘switching’ part (which I suppose justifies the AppSwitch name). So far, I have tested it on Java, Go and Python servers. With network syscall interception seemingly working fine, the data then flows like a hot knife through butter. There may be some more features and techniques that I may have missed though. Going through <strong><code>ax --help</code></strong> it seems there are some options for egress, WAN etc, but I haven’t played with those much. </span></p>
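<p>To get an intuition for the interception part, here's a toy sketch of my own (not AppSwitch's actual mechanism; the virtual address map and names are made up) that "switches" a virtual IP to a real one by wrapping <code>connect()</code> purely in userspace:</p>

```python
# Toy userspace "switching": rewrite a virtual service address to a real one
# by wrapping socket.connect(). Purely an illustration of the interception
# idea discussed above, not how ax is actually implemented.
import socket
import threading

_real_connect = socket.socket.connect
VIRTUAL_ADDRESSES = {}  # (virtual_ip, port) -> (real_ip, port)

def switching_connect(self, address):
    # Rewrite known virtual addresses; pass everything else through.
    return _real_connect(self, VIRTUAL_ADDRESSES.get(address, address))

socket.socket.connect = switching_connect

# A trivial server on a real localhost port...
server = socket.socket()
server.bind(("127.0.0.1", 0))  # pick any free port
server.listen(1)
real_addr = server.getsockname()

def serve_once():
    conn, _ = server.accept()
    conn.sendall(b"hello via the virtual address")
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

# ...registered under a made-up virtual IP, like "ax run --ip 1.2.3.4".
VIRTUAL_ADDRESSES[("1.2.3.4", 80)] = real_addr

client = socket.socket()
client.connect(("1.2.3.4", 80))  # transparently switched to 127.0.0.1
reply = b""
while True:
    chunk = client.recv(1024)
    if not chunk:
        break
    reply += chunk
client.close()
server.close()
print(reply)  # b'hello via the virtual address'
```

<p>A real implementation of course has to intercept at the syscall boundary so that it works for any language and for statically linked binaries, which is exactly the hard part the post discusses.</p>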
<h3>Some Resources</h3>
<ul>
<li>Slack Channel : <a href="https://slofile.com/slack/appswitch" target="_blank" rel="noopener">https://slofile.com/slack/appswitch</a></li>
<li>DockerHub (containerized binary): <a href="https://hub.docker.com/r/appswitch/ax/" target="_blank" rel="noopener">https://hub.docker.com/r/appswitch/ax/</a> (you need to request to be added to repo)</li>
</ul>
<h3>References</h3>
<p><span style="font-weight: 400;">[1] Practical analysis of stripped binary code [<a href="ftp://ftp.cs.wisc.edu/paradyn/papers/Harris05WBIA.pdf" target="_blank" rel="noopener">link</a>]<br />
[2] Analyzing Dynamic Binary Instrumentation Overhead [<a href="https://pdfs.semanticscholar.org/8d3d/42706198efef0e0987c570bf4690a20334a1.pdf" target="_blank" rel="noopener">link</a>]</span></p>2018-03-31T19:54:23+00:00suchakraShrinivasan: Project Idea – Mobile application to explain about the place I am inhttps://goinggnu.wordpress.com/2018/03/30/project-idea-mobile-application-to-explain-about-the-place-i-am-in/
<p>A few months ago, I was talking to Siva. He is part of Tamil Heritage Activities. They often take people to Mahabalipuram and explain all of its great history.</p>
<p>They are looking for a mobile app that automatically explains the nearby place, using the person's geolocation.</p>
<p>If you go near the Tiger Cave at Mahabalipuram, the app should tell you all about the Tiger Cave. The data can come from a Wikipedia page or some custom web service.</p>
<p>We can extend the same idea to any place: for example, all the temples in Kanchipuram, or even places all over the world.</p>
<p>Wikipedia and wikidata can be great starting points for providing required information.</p>
<p>Now, we are looking for Android/iOS developers to develop this as an open source mobile application.</p>
<p>If you are interested in this, mail me on tshrinivasan@gmail.com or reply here.</p>
<p>Thanks.</p>2018-03-30T18:17:26+00:00tshrinivasanDebarshi Ray: The ways of the GNOME peoplehttps://debarshiray.wordpress.com/2018/03/30/the-ways-of-the-gnome-people/
<p>
<em>This is a serious post. Or is it? <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f609.png" style="height: 1em;" alt="😉" class="wp-smiley" /></em>
</p>
<p>
Hidden away in the farthest corner of the planet, its slopes covered in mist and darkness and its peaks lost in the clouds, stands the formidable Mount GNOME. Perched atop the mountain is a castle as menacing as the mountain itself – its towering walls made of stones as cold as death, and the wind howling through the courtyard like a dozen witches screaming for blood.
</p>
<p>
Living inside the imposing blackness are a group of feral savages, of whom very little is known to the world outside. The deathly walls of the castle bear testimony to their skull-crushing barbarism, and their vile customs have laid waste to the surrounding slopes and valleys. Mortally fearful of invoking their mad wrath, no human traveller has dared to come near the vicinity of their territory. Shrouded in anonymity, they draw their name from the impregnable mountain that they inhabit – they are the GNOME people.
</p>
<p>
Legend has it that they are unlike any human settlement known to history. Some say that they are barely human. They are like a foul amorphous mass that glides around Mount GNOME, filling the air with their fiendish thoughts, and burning every leaf and blade of grass with their fierce hatred. Living in an inferno of collectivism, the slightest notion of individuality is met with the harshest of punishments. GNOMEies are cursed with eternal bondage to the evil spirits of the dark mountain.
</p>
<p>
<em>Happy Easter!</em></p>2018-03-30T09:48:29+00:00Debarshi RayJaikiran Pai: Ant 1.10.3 released with JUnit 5 supporthttps://jaitechwriteups.blogspot.com/2018/03/ant-1103-released-with-junit-5-support.html
<div style="text-align: left;" dir="ltr">We just <a href="https://www.mail-archive.com/user@ant.apache.org/msg42755.html">released</a> 1.9.11 and 1.10.3 versions of Ant today. The downloads are available on the <a href="https://ant.apache.org/bindownload.cgi">Ant project's download page</a>. Both these releases are mainly bug fix releases, especially the 1.9.11 version. The 1.10.3 release is an important one for a couple of reasons. The previous 1.10.2 release unintentionally introduced a bunch of changes which caused regressions in various places in Ant tasks. These have now been reverted or fixed in this new 1.10.3 version. <br /><br />In addition to these fixes, the 1.10.3 version of Ant introduces a new <a href="https://ant.apache.org/manual/Tasks/junitlauncher.html">junitlauncher</a> task. A while back, the JUnit team released <a href="https://junit.org/junit5/">JUnit 5</a>. This version is a major change from the previous JUnit 3.x &amp; 4.x versions, both in terms of how tests are written and how they are executed. JUnit 5 introduces a separation between test launching and test identification and execution. What that means is, for build tools like Ant, there's now a clear API exposed by JUnit 5 which is solely meant to deal with how tests are launched. Imagine something along the lines of "launch test execution for classes within this directory". Although Ant's <a href="https://ant.apache.org/manual/Tasks/junit.html">junit</a> task already supported such constructs, the way we used to launch those tests was very specific to Ant's own implementation and was getting more and more complex. With the introduction of this new API within the JUnit 5 library, it's much easier and more consistent now to launch these tests.<br /><br />JUnit 5 further introduces the concept of test engines. Test engines are responsible for "identifying" which classes are actually tests and what semantics to apply to those tests. 
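<p>Concretely, using the new task looks roughly like this (a sketch only; the paths, class names and directory layout here are illustrative, see the junitlauncher manual linked above for the full syntax):</p>

```xml
<!-- Sketch of the new junitlauncher task. The JUnit 5 platform,
     jupiter and/or vintage engine jars must be on the classpath. -->
<path id="test.classpath">
    <pathelement location="build/classes"/>
    <pathelement location="build/test-classes"/>
    <fileset dir="lib" includes="junit-*.jar opentest4j-*.jar"/>
</path>

<junitlauncher>
    <classpath refid="test.classpath"/>
    <!-- launch a single test class... -->
    <test name="org.example.MyFirstTest"/>
    <!-- ...or everything the registered engines identify under a directory -->
    <testclasses>
        <fileset dir="build/test-classes" includes="**/*Test.class"/>
        <listener type="legacy-plain"/>
    </testclasses>
</junitlauncher>
```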
JUnit 5 by default comes with a "vintage" engine which identifies and runs JUnit 4.x style tests and a "jupiter" engine which identifies and runs JUnit 5.x API based tests. <br /><br />The "junitlauncher" task in Ant introduces a way to let the build specify which classes to choose for test launching. The goal of this task is to just launch the test execution and let the JUnit 5 framework identify and run the tests. The implementation shipped in Ant 1.10.3 is the bare minimum for this task. We plan to add more features as we go along and as we get feedback on it. In particular, this new task doesn't currently support executing the tests in a separate forked JVM, but we do plan to add that in a subsequent release. <br /><br />The junit task, which has shipped with Ant for a long time, will continue to exist and can be used for executing JUnit 3.x or JUnit 4.x tests. However, for JUnit 5 support, the junitlauncher task is what will be supported in Ant.<br /><br />More details about this new task can be found in the <a href="https://ant.apache.org/manual/Tasks/junitlauncher.html">junitlauncher's task manual</a>. Please give it a try and report any bugs or feedback to our <a href="https://ant.apache.org/mail.html">user mailing list</a>.</div>2018-03-28T09:21:49+00:00JaikiranKartik Mistry: Strava, Part Fourhttps://kartikm.wordpress.com/2018/03/27/strava-4/
<p>The fourth post in this series. When I published the third post yesterday, Nizil remarked that it's a good thing Strava uses the OpenStreetMap map. Indeed. It was seeing Strava's incomplete map that got my <a href="https://www.openstreetmap.org" target="_blank" rel="noopener">OpenStreetMap</a> contributions started. Our favourite places, such as Mastermind, Secret Spice (the BRM spot), various tea stalls, nearby buildings, important points and so on, were added precisely because of Strava. The Strava mobile apps use Google Maps, which bothers me, but it's okay. You can upload your Strava GPS traces to OpenStreetMap; though, since I don't do off-roading, I haven't had that opportunity yet. People who hike and trek really should contribute there.</p>
<p>See also:<br />
* Part 1: <a href="https://kartikm.wordpress.com/2017/03/09/strava/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2017/03/09/strava</a><br />
* Part 2: <a href="https://kartikm.wordpress.com/2018/03/25/strava-2/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2018/03/25/strava-2</a><br />
* Part 3: <a href="https://kartikm.wordpress.com/2018/03/26/strava-3/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2018/03/26/strava-3/</a></p>2018-03-27T14:17:45+00:00KartikKartik Mistry: Twelve Years of the Bloghttps://kartikm.wordpress.com/2018/03/26/12-years-of-blog/
<p><a href="https://kartikm.wordpress.com/2006/03/26/first-post-and-baxi-babu/" target="_blank" rel="noopener">This blog started 12 years ago</a>, and it is not a teenager yet. I still enjoy blogging. Let's see when that joy runs out. Blog, running, cycling, life: I haven't decided which one gets priority, but right now all four run roughly in parallel. Sometimes one of them pulls ahead and another falls behind; on the whole, though, none of the four has run out of breath.</p>
<p>One related and lovely piece of news: Sanjaybhai has uploaded wonderful photos of <a href="https://gu.wikipedia.org/wiki/%E0%AA%9A%E0%AA%82%E0%AA%A6%E0%AB%8D%E0%AA%B0%E0%AA%95%E0%AA%BE%E0%AA%82%E0%AA%A4_%E0%AA%AC%E0%AA%95%E0%AB%8D%E0%AA%B7%E0%AB%80" target="_blank" rel="noopener">Chandrakant Bakshi</a> and several other writers to Wikipedia. A <a href="https://kartikm.wordpress.com/2013/03/25/missing-baxibabu/" target="_blank" rel="noopener">wish of many years</a> has come true. Thanks to Sanjaybhai and Anant, and may many people take inspiration from them!</p>
<p>See also:<br />
* 3 years: <a href="https://kartikm.wordpress.com/2009/03/25/3-years-2/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2009/03/25/3-years-2/</a><br />
* 4 years: <a href="https://kartikm.wordpress.com/2010/03/25/not-yet-missing-blog/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2010/03/25/not-yet-missing-blog/</a><br />
* 5 years: <a href="https://kartikm.wordpress.com/2011/03/25/towards-6th-year/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2011/03/25/towards-6th-year/</a><br />
* 6 years: <a href="https://kartikm.wordpress.com/2012/03/26/happy-birthday-my-blog/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2012/03/26/happy-birthday-my-blog/</a><br />
* 8 years: <a href="https://kartikm.wordpress.com/2014/03/25/આઠ-વર્ષ/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2014/03/25/આઠ-વર્ષ/</a><br />
* 10 years: <a href="https://kartikm.wordpress.com/2016/03/26/10-years-blog/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2016/03/26/10-years-blog/</a></p>2018-03-26T13:20:06+00:00KartikKartik Mistry: Strava, Part 3https://kartikm.wordpress.com/2018/03/26/strava-3/
<p>This is the third post in the <em>Strava series</em>. It is about people who record and upload a Strava activity on two or three devices at once: phone, Garmin, or watches. Some people love their activity so much that it must be recorded and uploaded no matter what. Quite often the phone or the Garmin betrays you at the last moment and the ride or run doesn't get recorded at all (it has happened to me once or twice, once even during a race). To avoid this, people record on two devices (e.g. on both the Garmin and the phone). Recording on both is fine, but then they upload both to Strava. So you end up with "Kartik Mistry group ride with Kartik Mistry". That is, I am cycling, with myself! <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" style="height: 1em;" alt="🙂" class="wp-smiley" /> Fine, even if such a ride does get uploaded twice, at least delete one of the two! But people just don't get it <img src="https://s1.wp.com/wp-content/mu-plugins/wpcom-smileys/uneasy.svg" style="height: 1em;" height="16" draggable="false" width="16" alt=":/" class="wp-smiley emoji" /></p>
<p>For such people we have created a <em>club</em>: <a href="https://www.strava.com/clubs/ycsdrut" target="_blank" rel="noopener">Yes, You can safely delete ride uploaded twice!!</a> Join today!</p>
<p>PS: Apologies for <strong>not</strong> notifying those concerned!</p>
<p>See also:<br />
* Part 1: <a href="https://kartikm.wordpress.com/2017/03/09/strava/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2017/03/09/strava</a><br />
* Part 2: <a href="https://kartikm.wordpress.com/2018/03/25/strava-2/" target="_blank" rel="noopener">https://kartikm.wordpress.com/2018/03/25/strava-2</a></p>2018-03-26T03:42:23+00:00KartikKartik Mistry: Strava, Part 2https://kartikm.wordpress.com/2018/03/25/strava-2/
<p>* As I wrote in the previous <a href="https://kartikm.wordpress.com/2017/03/09/strava/">Strava</a> post, after my weekend ride or run I sit down to hunt for wrong or mistaken Strava activities. This work gives me as much joy as cycling up Bhor Ghat or reverting a vandalised edit on Wikipedia. One should have such genuinely sarcastic fun in life. Now, the reason for today's post is a runner. I have seen him run. He is a good runner, and a doctor, so he is certainly not uneducated. I confirmed with another runner that he is a guy; he has even posted photos of his six-pack. Yet he keeps his gender on Strava set to female. When I pointed this out, he thanked me. But he still hasn't fixed it. Well, it's his choice to be whoever he wants, but it is wrong that so many women's records are thereby rendered false!</p>
<p>The third such Strava-improvement post comes tomorrow. But this time only after notifying those concerned!! <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" style="height: 1em;" alt="🙂" class="wp-smiley" /></p>2018-03-25T08:24:41+00:00KartikNirbheek Chauhan: Low-latency audio on Windows with GStreamerhttp://blog.nirbheek.in/2018/03/low-latency-audio-on-windows-with.html
<div style="text-align: left;" dir="ltr">Digital audio is so ubiquitous that we rarely stop to think or wonder how the gears turn underneath our all-pervasive apps for entertainment. Today we'll look at one specific piece of the machinery: latency.<br /><br />Let's say you're making a video of someone's birthday party with an app on your phone. Once the recording starts, you don't care when the app starts writing it to disk<span class="st">—</span>as long as everything is there in the end.<br /><br />However, if you're having a Skype call with your friend, it matters a <i>whole lot</i> how long it takes for the video to reach the other end and vice versa. It's impossible to have a conversation if the lag (latency) is too high.<br /><br />The difference is, do you need real-time feedback or not?<br /><br />Other examples, in order of increasingly strict latency requirements, are: live video streaming, security cameras, augmented reality games such as <a href="https://en.wikipedia.org/wiki/Pok%C3%A9mon_Go" target="_blank">Pokémon Go</a>, multiplayer video games in general, audio effects apps for live music recording, and many, many more.<br /><br />“But Nirbheek”, you might ask, “why doesn't everyone always ‘immediately’ send/store/show whatever is recorded? Why do people have to worry about latency?” and that's a great question!<br /><br />To understand that, check out my previous blog post, <a href="http://blog.nirbheek.in/2018/03/latency-in-digital-audio.html" target="_blank">Latency in Digital Audio</a>.
It's also a good primer on analog vs digital audio!<br /><br /><h2 style="text-align: left;">Low latency on consumer operating systems</h2><div style="text-align: left;"><br /></div><div style="text-align: left;">Each operating system has its own set of application APIs for audio, and each has a lower bound on the achievable latency:</div><div style="text-align: left;"><br /></div><ul style="text-align: left;"><li>Linux has <a href="https://www.alsa-project.org/main/index.php/ALSA_Library_API" target="_blank">alsa-lib</a> (old), <a href="https://en.wikipedia.org/wiki/Pulseaudio" target="_blank">Pulseaudio</a> (standard), <a href="https://en.wikipedia.org/wiki/JACK_Audio_Connection_Kit" target="_blank">JACK</a> (pro-audio), and <a href="https://pipewire.org/" target="_blank">Pipewire</a> (<a href="https://blogs.gnome.org/uraeus/2018/01/26/an-update-on-pipewire-the-multimedia-revolution-an-update/" target="_blank">under development</a>)</li><li>macOS and iOS have <a href="https://en.wikipedia.org/wiki/Core_Audio" target="_blank">CoreAudio</a> (standard, pro-audio)</li><li>Android has <a href="https://source.android.com/devices/audio/" target="_blank">AudioFlinger</a> (Java API, android.media), <a href="https://en.wikipedia.org/wiki/OpenSL_ES" target="_blank">OpenSL ES</a> (C/C++ API), and <a href="https://source.android.com/devices/audio/aaudio" target="_blank">AAudio</a> (C/C++ API, new, pro-audio)</li><li>Windows has <a href="https://en.wikipedia.org/wiki/Directsound" target="_blank">DirectSound</a> (deprecated), <a href="https://en.wikipedia.org/wiki/Technical_features_new_to_Windows_Vista#Audio_stack_architecture" target="_blank">WASAPI</a> (standard), and <a href="https://en.wikipedia.org/wiki/Audio_Stream_Input/Output" target="_blank">ASIO</a> (proprietary, old, pro-audio).</li><li>BSDs still use <a href="https://en.wikipedia.org/wiki/Open_Sound_System">OSS</a></li></ul><div style="text-align: left;"><br /></div><div style="text-align: left;">GStreamer already
has plugins for almost all of these<a href="http://blog.nirbheek.in/feeds/posts/default#gst-plugins">¹</a> (plus others that aren't listed here), and on Windows, GStreamer has been using the DirectSound API by default for audio capture and output since the very beginning.<br /><br />However, the DirectSound API was deprecated in Windows XP, and with Vista, it was removed and replaced with an emulation layer on top of the newly-released WASAPI. As a result, the plugin can't be configured to have less than 200ms of latency, which makes it unsuitable for all the low-latency use-cases mentioned above. The DirectSound API is quite crufty and unnecessarily complex anyway.<br /><br />GStreamer is rarely used in video games, but it is widely used for live streaming, audio/video calls, and other real-time applications. Worse, the WASAPI GStreamer plugins were effectively untouched and unused since the initial implementation in 2008 and were completely broken<a href="http://blog.nirbheek.in/feeds/posts/default#gst-windows">²</a>.<br /><br />This left no way to achieve low-latency audio capture or playback on Windows using GStreamer.<br /><br />The situation became particularly dire when GStreamer added a new <a href="http://blog.nirbheek.in/2018/02/gstreamer-webrtc.html">implementation of the WebRTC spec</a> in this <a href="https://gstreamer.freedesktop.org/releases/1.14/">release cycle</a>. People trying it out on Windows would see much higher latencies than they should.<br /><br />Luckily, I rewrote most of the WASAPI plugin code in January and February, and it should now work well on all versions of Windows from Vista to 10!
You can get <a href="https://gstreamer.freedesktop.org/data/pkg/windows/1.14.0.1/">binary installers for GStreamer</a> or <a href="https://gstreamer.freedesktop.org/documentation/installing/building-from-source-using-cerbero.html">build it from source</a>.<br /><br /><h2 style="text-align: left;">Shared and Exclusive WASAPI</h2><br />WASAPI allows applications to open sound devices in two modes: <i>shared</i> and <i>exclusive</i>. As the name suggests, <i>shared</i> mode allows multiple applications to output to (or capture from) an audio device at the same time, whereas <i>exclusive</i> mode does not.<br /><br />Almost all applications should open audio devices in shared mode. It would be quite disastrous if your YouTube videos played without sound because Spotify decided to open your speakers in exclusive mode.<br /><br />In shared mode, the audio engine has to resample and mix audio streams from all the applications that want to output to that device. This increases latency because it must maintain its own audio ringbuffer for doing all this, from which audio buffers will be periodically written out to the audio device.<br /><br />In theory, hardware mixing could be used if the sound card supports it, but very few sound cards implement that now since it's so cheap to do in software. On Windows, only high-end audio interfaces used for professional audio implement this.<br /><br />Another option is to allocate your audio engine buffers directly in the sound card's memory with DMA, but that complicates the implementation and relies on good drivers from hardware manufacturers. Microsoft has tried similar approaches in the past with DirectSound and been burned by it, so it's not a route they took with WASAPI<a href="http://blog.nirbheek.in/feeds/posts/default#ms-audio-history">³</a>.<br /><br />On the other hand, some applications know they will be the only ones using a device, and for them all this machinery is a hindrance. This is why <i>exclusive</i> mode exists. 
In this mode, if the audio driver is implemented correctly, the application's buffers will be directly written out to the sound card, which will yield the lowest possible latency.<br /><br /><h2 style="text-align: left;">Audio latency with WASAPI</h2><br />So what kind of latencies <i>can</i> we get with WASAPI?<br /><br />That depends on the <i>device period</i> that is being used. The term <i>device period</i> is a fancy way of saying <i>buffer size</i>; specifically the buffer size that is used in each call to your application that fetches audio data.<br /><br />This is the same period with which audio data will be written out to the actual device, so it is the major contributor of latency in the entire machinery.<i></i><br /><br />If you're using the <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/dd370865">AudioClient</a> interface in WASAPI to initialize your streams, the default period is 10ms. This means the theoretical <i>minimum</i> latency you can get in <i>shared mode</i> would be 10ms (audio engine) + 10ms (driver) = 20ms. In practice, it'll be somewhat higher due to various inefficiencies in the subsystem.<br /><br />When using <i>exclusive mode</i>, there's no engine latency, so the same number goes down to ~10ms.<br /><br />These numbers are decent for most use-cases, but like I explained in my <a href="http://blog.nirbheek.in/2018/03/latency-in-digital-audio.html">previous blog post</a>, this is totally insufficient for pro-audio use-cases such as applying live effects to music recordings. You really need latencies that are lower than 10ms there.<br /><br /><h2 style="text-align: left;">Ultra-low latency with WASAPI</h2><br />Starting with Windows 10, WASAPI removed most of its aforementioned inefficiencies, and introduced a new interface: <a href="https://msdn.microsoft.com/library/windows/desktop/dn911487">AudioClient3</a>. 
If you initialize your streams with this interface, and if your audio driver is implemented correctly, you can configure a device period of just <i>2.67ms</i> at 48 kHz.<br /><br />The best part is that this is the period not just in exclusive mode but <i>also in shared mode</i>, which brings WASAPI almost at par with JACK and CoreAudio.<br /><br />So that was the good news. Did I mention there's bad news too? Well, now you know.<br /><br />The first bit is that these numbers are only achievable if you use Microsoft's implementation of the Intel HD Audio standard for consumer drivers. This is fine; you follow <a href="https://blogs.msdn.microsoft.com/matthew_van_eerde/2010/08/23/troubleshooting-how-to-install-the-microsoft-hd-audio-class-driver/">some badly-documented steps</a> and it works out.<br /><br />Then you realize that if you want to use something more high-end than an Intel HD Audio sound card, you will still see 10ms device periods, unless you use <a href="http://www.motu.com/newsitems/windows-wave-rt-support-is-now-shipping">one of the rare</a> pro-audio interfaces whose drivers use the new <a href="https://docs.microsoft.com/en-us/windows-hardware/drivers/audio/understanding-the-wavert-port-driver">WaveRT</a> driver model instead of the old <a href="https://msdn.microsoft.com/en-us/library/windows/hardware/ff538767">WaveCyclic</a> model.<br /><br />It seems the pro-audio industry made the decision to stick with ASIO since it already provides &lt;5ms latency. They don't care that the API is proprietary, and that most applications can't actually use it because of that. All the apps that are used in the pro-audio world already work with it.<br /><br />The strange part is that all this information is nowhere on the Internet and seems to lie solely in the minds of the Windows audio driver cabals across the US and Europe.
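The device-period arithmetic above is simple enough to sketch in Python. The helper names below are mine, not part of any real API; the numbers (10 ms default period, 2.67 ms with AudioClient3 at 48 kHz) come from the discussion above:

```python
# Back-of-the-envelope WASAPI latency arithmetic; purely illustrative.

def period_ms_to_frames(period_ms, sample_rate):
    """A device period in milliseconds, expressed as audio frames."""
    return round(sample_rate * period_ms / 1000.0)

def shared_mode_min_latency_ms(period_ms):
    """Shared mode pays the period twice: audio engine + driver."""
    return 2 * period_ms

def exclusive_mode_min_latency_ms(period_ms):
    """Exclusive mode bypasses the engine, leaving only the driver period."""
    return period_ms

print(shared_mode_min_latency_ms(10))      # default 10 ms period: 20 ms minimum
print(exclusive_mode_min_latency_ms(10))   # ~10 ms
print(period_ms_to_frames(2.67, 48000))    # AudioClient3 period: 128 frames
```

In practice the real numbers are somewhat higher than these theoretical minima, as noted above, because of inefficiencies in the subsystem.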
It's surprising and frustrating for someone used to working in the open to see such counterproductive information asymmetry, and <a href="https://github.com/kinetiknz/cubeb/issues/324">I'm not the only one</a>.<br /><br />This is where I plug open-source and talk about how Linux has had ultra-low latencies for years since all the audio drivers are open-source, follow the same <a href="https://www.kernel.org/doc/html/v4.10/sound/kernel-api/index.html">ALSA driver model</a><a href="http://blog.nirbheek.in/feeds/posts/default#alsa-kernel">⁴</a>, and are constantly improved. JACK is probably the most well-known low-latency audio engine in existence, and was born on Linux. People are even using Pulseaudio these days to work with &lt;5ms latencies.<br /><br />But this blog post is about Windows and WASAPI, so let's get back on track.<br /><br />To be fair, Microsoft is not to blame here. Decades ago they decided not to work more closely with the companies that write drivers for their standard hardware components, and they're still paying the price for it. Blue screens of death were the most user-visible consequences, but the current audio situation is an indication that losing control of your platform has more dire consequences.<br /><br />There is one more bit of bad news. In my testing, I wasn't able to get glitch-free <i>capture</i> of audio in the source element using the AudioClient3 interface at the minimum configurable latency in shared mode, even with <a href="https://cgit.freedesktop.org/gstreamer/gst-plugins-bad/tree/sys/wasapi/gstwasapiutil.c#n980">critical thread priorities</a>, unless there was nothing else running on the machine.<br /><br />As a result, this feature is disabled by default on the source element.
This is unfortunate, but not a great loss since the same device period is achievable in exclusive mode without glitches.<br /><br /><h2 style="text-align: left;">Measuring WASAPI latencies</h2><br />Now that we're back from our detour, the executive summary is that the GStreamer WASAPI <a href="https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad/html/gst-plugins-bad-plugins-wasapisrc.html#gst-plugins-bad-plugins-wasapisrc.description">source</a> and <a href="https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad/html/gst-plugins-bad-plugins-wasapisink.html#gst-plugins-bad-plugins-wasapisink.description">sink</a> elements now use the latest recommended WASAPI interfaces. You should test them out and see how well they work for you!<br /><br />By default, a device is opened in shared mode with a conservative latency setting. To force the stream into the lowest latency possible, set <i>low-latency=true</i>. If you're on Windows 10 and want to force-enable/disable the use of the AudioClient3 interface, toggle the <i>use-audioclient3</i> property.<br /><br />To open a device in exclusive mode, set <i>exclusive=true</i>. This will ignore the <i>low-latency</i> and <i>use-audioclient3</i> properties since they only apply to shared mode streams. 
When a device is opened in exclusive mode, the stream will always be configured for the lowest possible latency by WASAPI.<br /><br />To measure the actual latency in each configuration, you can use the new <a href="https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad/html/gst-plugins-bad-plugins-audiolatency.html#gst-plugins-bad-plugins-audiolatency.description">audiolatency</a> plugin that I wrote to get hard numbers for the total end-to-end latency including the latency added by the GStreamer audio ringbuffers in the source and sink elements, the WASAPI audio engine (capture and render), the audio driver, and so on.<br /><br />I look forward to hearing what your numbers are on Windows 7, 8.1, and 10 in all these configurations! ;)<br /><br /><a href="http://blog.nirbheek.in/feeds/posts/default" name="gst-plugins"></a><br /><span style="font-size: x-small;">1. The only ones missing are AAudio, because it's very new, and ASIO, which is a proprietary API with licensing requirements.</span><br /><a href="http://blog.nirbheek.in/feeds/posts/default" name="gst-windows"></a><br /><span style="font-size: x-small;">2. It's no secret that although lots of people use GStreamer on Windows, the majority of GStreamer developers work on Linux and macOS. As a result the Windows plugins haven't always gotten a lot of love. It doesn't help that <a href="https://gstreamer.freedesktop.org/documentation/installing/building-from-source-using-cerbero.html">building GStreamer on Windows</a> can be a daunting task. This is actually one of the major reasons why we're moving to Meson, but I've already <a href="http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html">written about that elsewhere</a>!</span><br /><a href="http://blog.nirbheek.in/feeds/posts/default" name="ms-audio-history"></a><br /><span style="font-size: x-small;">3.
My knowledge about the history of the decisions behind the Windows Audio API is spotty, so corrections and expansions on this are most welcome!</span><br /><a href="http://blog.nirbheek.in/feeds/posts/default" name="alsa-kernel"></a><br /><span style="font-size: x-small;">4. The ALSA drivers in the Linux kernel should not be confused with the ALSA userspace library.</span></div></div>2018-03-24T00:39:34+00:00NirbheekKushal Das: Using Python to access Onion network over SOCKS proxyhttps://kushaldas.in/posts/using-python-to-access-onion-network-over-socks-proxy.html
<p><a href="https://www.torproject.org">Tor</a> provides a SOCKS proxy
so that any application can use it to connect to the Onion
network. The default port is 9050. The <a href="https://www.torproject.org/projects/torbrowser.html.en">Tor
Browser</a> also provides
the same service on port 9150. In this post, we will see how we can use
this SOCKS proxy to access the Internet.</p>
<h3>Using Python requests module</h3>
<p>I used <a href="https://docs.pipenv.org/">pipenv</a> to install the dependencies.</p>
<pre><code>$ pipenv install
$ pipenv shell
$ pipenv install requests[socks]
Installing requests[socks]…
Collecting requests[socks]
  Using cached requests-2.18.4-py2.py3-none-any.whl
Collecting chardet&lt;3.1.0,&gt;=3.0.2 (from requests[socks])
  Using cached chardet-3.0.4-py2.py3-none-any.whl
Collecting urllib3&lt;1.23,&gt;=1.21.1 (from requests[socks])
  Using cached urllib3-1.22-py2.py3-none-any.whl
Collecting idna&lt;2.7,&gt;=2.5 (from requests[socks])
  Using cached idna-2.6-py2.py3-none-any.whl
Collecting certifi&gt;=2017.4.17 (from requests[socks])
  Using cached certifi-2018.1.18-py2.py3-none-any.whl
Collecting PySocks!=1.5.7,&gt;=1.5.6; extra == "socks" (from requests[socks])
  Using cached PySocks-1.6.8.tar.gz
Building wheels for collected packages: PySocks
  Running setup.py bdist_wheel for PySocks: started
  Running setup.py bdist_wheel for PySocks: finished with status 'done'
  Stored in directory: /home/kdas/.cache/pip/wheels/77/f0/00/52f304b7dddcca8fca05ad1226382134ad50ba6c1662d7539e
Successfully built PySocks
Installing collected packages: chardet, urllib3, idna, certifi, PySocks, requests
Successfully installed PySocks-1.6.8 certifi-2018.1.18 chardet-3.0.4 idna-2.6 requests-2.18.4 urllib3-1.22
Adding requests[socks] to Pipfile's [packages]…
Pipfile.lock (711973) out of date, updating to (dcbf91)…
Locking [dev-packages] dependencies…
Locking [packages] dependencies…
Updated Pipfile.lock (dcbf91)!
Installing dependencies from Pipfile.lock (dcbf91)…
  🐍   ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 6/6 — 00:00:01
</code></pre>
<p>After this, writing the actual code is very simple; we will do a <code>GET</code>
request to <a href="https://httpbin.org">https://httpbin.org</a> to find out our IP address.</p>
<pre><code class="language-Python">import requests


def main():
    proxies = {
        'http': 'socks5h://127.0.0.1:9050',
        'https': 'socks5h://127.0.0.1:9050'
    }
    r = requests.get('https://httpbin.org/get', proxies=proxies)
    print(r.text)


if __name__ == '__main__':
    main()
</code></pre>
<p>If you look closely, you will find that I am using <strong>socks5h</strong> as the protocol,
instead of <em>socks5</em>. The <em>requests</em> documentation mentions that using <em>socks5h</em>
will make sure that DNS resolution happens over the proxy instead of on the
client side.</p>
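To see the distinction side by side (the addresses below are just Tor's default local proxy endpoint from earlier in the post), the only thing that changes between a DNS-leaking configuration and a safe one is the URL scheme:

```python
# 'socks5'  : hostnames are resolved by the local DNS resolver first,
#             leaking the sites you visit to your network.
# 'socks5h' : hostnames are handed to the proxy, so DNS resolution
#             also happens inside the Tor network.
leaky_proxies = {
    'http': 'socks5://127.0.0.1:9050',
    'https': 'socks5://127.0.0.1:9050',
}
safe_proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050',
}

# Everything except the scheme is identical.
for key in ('http', 'https'):
    assert safe_proxies[key] == leaky_proxies[key].replace('socks5://', 'socks5h://')
```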
<p>The output of the script looks like this:</p>
<pre><code>$ python usesocks.py
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Connection": "close",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.18.4"
  },
  "origin": "137.74.169.241",
  "url": "https://httpbin.org/get"
}
$ python usesocks.py
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Connection": "close",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.18.4"
  },
  "origin": "77.247.181.162",
  "url": "https://httpbin.org/get"
}
</code></pre>
<p>Now, you can use the same code to access any standard web service or any
Onion address.</p>2018-03-23T18:52:00+00:00Kushal Das