I was telling Henry about an interesting use case of KStars a few days ago, and he suggested that I blog about it.

I encountered this problem while preparing for a Practical Amateur Astronomy workshop that we were organizing. We had made lists of various celestial objects for people to observe, along with some hand-written descriptions. We edited the lists collaboratively on Google Spreadsheets, and at some point I declared the lists final and made a CSV export. I wanted the lists to be organized by constellation and also have some more vital information about the objects filled in.

Enter KStars and D-Bus. KStars has D-Bus interface functions that let you access many of its features. I use qdbus to access them over the shell. (Note that the following is known to work on GNU/Linux. I am entirely unsure about Windows and Mac platforms.) Here’s a brief example of making KStars point towards M 33:

qdbus org.kde.kstars /KStars org.kde.kstars.lookTowards "M 33"

(Note: due to a bug in KStars at the moment, you need to invoke the above multiple times to get the object centered.)

Then, let’s say we want to query information on NGC 2903. We can do so by using:
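Assuming the method is named getObjectDataXML (that is how I recall the interface naming it; verify against your KStars build), the call looks like:

```shell
# Ask a running KStars instance for data on NGC 2903, as XML.
# The method name getObjectDataXML is an assumption; run
# `qdbus org.kde.kstars /KStars` to list the methods actually available.
qdbus org.kde.kstars /KStars org.kde.kstars.getObjectDataXML "NGC 2903"
```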

One can now use tools like xmlstarlet to work with the XML on the command line.

There. That has all the information I need to complete the checklists. So I went ahead and wrote a small shell script to order the objects by constellation and typeset a table using LaTeX. The results look like this:

Many more wonderful things are possible because of the D-Bus interface. In fact, my Logbook project relies heavily on KStars’ D-Bus interface. The Logbook project uses KStars to produce amateur astronomers’ logbooks complete with fine and coarse finder charts, relevant data and DSS imagery.

One can use qdbusviewer and qdbus to further explore the available D-Bus methods in KStars and profit from scripting using KStars.
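With qdbus, each extra argument walks one level deeper into the interface, which makes discovery easy from the shell:

```shell
# Discover the KStars D-Bus interface from the shell (KStars must be running).
qdbus | grep kstars            # service names on the session bus
qdbus org.kde.kstars           # object paths exported by KStars
qdbus org.kde.kstars /KStars   # methods available on the /KStars object
```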

Gory detail follows, each attraction roughly organized into one
paragraph, so that this can help anyone planning a trip to the Bay Area.

Note: The alternate text (“tooltips”) on the images acts as captions, so hover over an image to see its caption.

Day 1 — Arrival at Stanford

I was killing time all through Monday, so when Kumar came to drop me
off at the airport, I hadn’t slept. A night of careful preparation
ensured that I took everything I’d need.

Arriving at San Jose, I took a bus to get to Vimal’s place on the
Stanford University campus. After a brief discussion about TCP, Vimal
took me around the Stanford campus for a short walk, aiming to reach a
sandwich place called Ike’s. They make AWESOME, albeit slightly
expensive, sandwiches, and anyone who visits Stanford must try them at
least once. It is clear from the long queues that they are really
popular. Chinmoy Venkatesh soon arrived, and together we went to
pick up Sathish Thiyagarajan and N G Srinivas (whom we often call
“NG”) from the Palo Alto Caltrain station. Nothing much ensued after
that, except some relaxation, looking around Stanford for a bit more,
clicking pictures and the like.

Day 2 — San Francisco

The next day started with a trip to San Francisco. Sathish went
ahead of us to meet his classmate in SF, while NG and I met Kishore at
Millbrae and then rode BART to the Embarcadero station. After
getting off on Market Street, we walked to The Embarcadero. The first
things that caught our sight were the Bay Bridge and Pier 14. We walked
along Pier 14 to click a few pictures with the Bay Bridge, and then
walked along The Embarcadero towards Pier 39, passing various
piers.

At Pier 39, we made our usual touristy purchase of a San Francisco
T-shirt and looked at the sea lions. We continued walking along the
bay to Fisherman’s Wharf. Since we didn’t want to spend too much time
there, we skipped the World War II submarine (USS Pampanito) tour. I intend to do
that when I next get the opportunity.

We then joined Carlos and Sathish to get a view of the Golden Gate
Bridge. A bit of walking along the Golden Gate and lots of pictures,
panoramas, etc. ensued. All those pictures are here. After
being satisfied with the view of the Golden Gate, we headed for lunch
at an Italian restaurant, which Carlos dropped us off at.

After some good gnocchi, we were on our way to Ghirardelli Square,
which was presumably the location of Ghirardelli’s first
chocolate-making “factory”. In there was a small exhibit of fresh
chocolate being made. I had a coffee ice cream and, trust me, it was
very hard to finish although it looked small. The chocolate was just
awesome.

After a short walk, we reached Hyde street just before twilight was
fading, got a view of the Golden Gate bridge from Hyde Street, and
walked to the top of Lombard Street. Walking down Lombard Street was
very nice. We then took the historic tram ride along the F-line back
to the Embarcadero BART station on Market and got back to Stanford.

Day 3 — Indian Food, Googleplex, and the Shoreline at Mountain
View

Day 3 began with South Indian traditional lunch at Komala Vilas,
Sunnyvale — something that is not available in Austin. The lunch that
day at Komala Vilas was awesome! We had “Vendakkai Morkuzhambu” (Okra curry
in a sour-yoghurt gravy with spices), “Keerai Koottu” (Gravy made out
of Spinach), Beetroot and Peas “Poriyal” (fried curry with sautéed Beet
and Peas), and Rasam (Tomato-based soup, usually mixed with rice and
eaten), followed by a Madras Filter Coffee. In the background, songs
from old Tamizh movies played. Suddenly, I felt transported back home.

We then left for Googleplex, Google’s campus between Charleston and
Amphitheatre Pkwy, in Mountain View. I’d written to Carol Smith from
the Open Source Programs Office, who directed me to Ellen Ko. These,
as is familiar to some of you, are the people that manage the GSoC
programme. Ellen was kind enough to take time off her schedule to show
us around the campus and get us some OSPO / GSoC Swag! We also got to
briefly look at the Open Source Programs Office. Thanks, Ellen, for
hosting us and taking time off your schedule for us! After exiting the
OSPO, we went back to the Android Park or whatever-it’s-called to get
some “canonical” snaps in front of the Android, the
Cupcake, the Donut, the Eclair, the Froyo, the Gingerbread, and the
Honeycomb.

We then walked across to the “Shoreline at Mountain View” Park. The
park offers lovely views of the bay, and the bright sunny Thursday
presented it to us in its full beauty. It was certainly worth the long
walk that we took to reach it.

After the Shoreline Park, it was time for dinner. We went to Hotel
Saravana Bhavan. Having gotten quality South Indian snacks and food
after a long time, the dinner seemed heavenly! I sampled a variety of
dishes, including the Sambar Vada, Ghee Idly, Rava Kichadi, Masala
Dosa, Poori, Onion Uthappam, Gulab Jamun and Filter Coffee. I would
not recommend the Gulab Jamun. The Dosa was the Tamil Nadu style,
which does not appeal to me. The Dosa at Dosa, San Francisco was, IMO,
far far better in quality. We returned to Stanford after
dinner. Getting to Saravana Bhavan involved long walks, but it was
totally worth it.

Day 4 — Computer History Museum, Caribbean dinner

Day 4 was bad planning, bad execution. While NG, Chinmoy and Kishore
went to an Indian food place called Vaigai in Sunnyvale, Sathish and I
went to the Computer History Museum. We were later joined by the
others, just before the museum closed.

Even the 1.5 hours I had in the museum was by far insufficient to
explore and enjoy the museum completely. The museum is a must-visit
for any computer geek, and I really regret that I could not fully see
it. In fact, I missed their prime exhibits — A real, working PDP-1
and a working Difference Engine 2 rebuilt from Charles Babbage’s
original design. They in fact do demos on weekends. Their website, I
must say, is very poor on demo information. More information would
have probably helped us plan better. I must revisit this museum
whenever I get the opportunity to do so.

However, let me stop ranting and talk about what I saw. They had
exhibits of slide rules, old calculators, punch card sorters and
processors, a replica of the first IBM punch-card machine designed for a
census, exhibits on the history of IBM, displays of old
electromechanical machines to integrate differential equations,
displays of various op-amps including a two-valve package, exhibits of
old computers, Cray-1 and Cray-2 supercomputers, PDP-11, etc. I didn’t
have a lot of time to go through all that stuff.

The museum is certainly worth a visit. I’d recommend that you keep 4 to 5
hours to look around. This means you should arrive
at the museum no later than noon or 1 PM, since the museum
closes at 5 PM.

On Sunday, they had a demo of the Difference Engine 2 at 1 PM (this
information, for instance, could not be located on their website; one
had to call the front desk in the morning to find out), but
my friends and I all missed it, since we had deliberately laid
more emphasis on spending time with S V Vikram in our planning.

We had dinner at a Caribbean food place in Palo Alto. We met Vibhav
over dinner. This was the first time I had Caribbean food, and it was
a new experience.

Day 5 — Twin Peaks and the Pacific Coast Highway

On Day 5, Ananth was free and Vimal decided to take a day off from
work. So Ananth booked a Zipcar and NG, Vimal, Sathish, and I joined
him on a long drive. First, we went to Twin Peaks. When we arrived, it
was all foggy, but soon, the fog cleared and we were able to get a
beautiful view of San Francisco downtown, the Bay bridge and the bay
beyond the bay bridge (To the South, I think). Sathish, NG and I
walked down a rather steep pathway to get better photographs. The
Market street, and LGBT colors at The Castro in San Francisco were
clearly visible. Market Street looked very nice, leading to the
downtown. The Golden Gate bridge was mostly obscured by fog. (Photoshere.)
An unintentional stretch on the Pacific Highway (California-1) as we
drove to Twin Peaks already gave us a glimpse of the natural beauty
that it had to offer! As Sathish put it, we had a “Roadrash
moment”. (More later.)

After enjoying the views from the Twin Peaks, we headed to have
Dosa (South Indian crêpe) at a place aptly titled Dosa in
SF. I got the Habanero-Mango Masala Dosa, owing to its closeness to
the Mysore Masala Dosa, which turned out to be a regrettable
decision. Heed their warning — it is EXTREMELY spicy — even for a
true Indian — possibly even for someone from Andhra Pradesh! Maybe
it’s spicier than what a Mexican can handle, too. Although that
Dosa was extremely tasty, it was equally spicy, and I
regret having eaten it, for my stomach was terribly upset by the
large gulps of water I had to take between bites of Dosa. Their Onion
Rava Dosa was authentic, indeed. If only the Habanero Dosa was less
spicy (maybe they can custom-make it?), it would have been one of the
most satisfying, excellent Dosas I’d ever eaten; much better than
(what was, in comparison,) the parody that Saravana Bhavan could
produce.

Remember that old game of the 1990s, from a then-young Electronic
Arts, called Road Rash? Almost everyone I know from my generation has
played it at some point — including a non-gamer like me. I
remember that Pacific Highway was the most beautiful of the routes in
that game. As we witnessed shortly later, photographs, let alone the
game’s graphics, can hardly capture the scenic land / seascapes of
that highway!

We drove on CA-1 from San Francisco to Santa Cruz. Unfortunately, a
stretch of the highway providing access to the beautiful “Big Sur”
section had collapsed. We were also unaware that one
could still access it, albeit with difficulty, from the Southern side
— that might have changed our decision to see the SF-Santa Cruz
stretch instead.

Nevertheless, the SF to Santa Cruz stretch was enough to make this
the best part of our trip. That highway is not ranked as the most
scenic in the U.S. for no reason. Indeed, we all (NG, Sat, and I)
decided that once we obtain our driver’s licenses, we should
definitely drive on this highway ourselves.

As we started off south from SF, there was the undulating road with
the beautiful Pacific ocean on our right. In no time, the highway
quickly transported us from a coastal view to one of a hill road
through a forest, and then again, presented a view with bare mountain
on one side and a beach on the other.

We got off at an anonymous beach and explored the beauty of the
place. The Pacific was desperate to drench our shoes and feet, or from
a more optimistic perspective, give us a royal welcome by washing our
feet (as is done, at least in India). More photography ensued. After a
while, we embarked further on our journey. Now, the Pacific Highway
took the form of something more like a Texan highway in spring — flat
land and green pastures on either side. Very soon, the highway morphed
into undulating green terrain on the left and the blue Pacific ocean
on the right.

Soon, we arrived at a beautiful bay — some tall-ish rocks on one
side, a steep precipice on the other, with an interesting shape that
seemed to have been a result of erosion and landslides. And down
there was the blue Pacific, with waves sprawling on the beach below
the bay. I certainly failed in my attempt to capture this beauty
through my lenses. That’s something that cannot be seen in photos, but
needs to be experienced in 3D, along with the strong wind. It could
have been a little sunnier and warmer, though.

That was followed by more scenic valleys, undulating terrain, and
views of the ocean. I haven’t seen a terrain that so smoothly morphs
from one kind to the other. You pass through all sorts of landforms —
beaches, valleys, hills, plains, woods — as you traverse the
highway.

Following our drive through the Pacific Highway, we reached Santa
Cruz. We grabbed a bite to eat and drink. We abandoned the boardwalk
since it was a bit too cold to do that. Ananth and I had nearly no
protection, and we’d anyway seen enough of the sea. NG decided to go
and have a look for a brief while. The rest of us stayed indoors.

We drove back on another scenic highway (17) to Sunnyvale, but
since I was feeling sick, I chose to sleep. After some shopping at
Sunnyvale, we headed back to Stanford. While I stayed back at Vimal’s
and Ananth’s place, they headed to pick up some fresh ice cream from
Coldstone. That coffee ice cream was heavenly, and I downed a lot of
it till my throat became a bit unhappy and I’d had too much.

Whoa! That was an awesome day! Would have been even more awesome if
I hadn’t been sick after eating at Santa Cruz.

Day 6 — Meeting more people and return to Austin

After sitting up late into the night and trying to figure out an ideal
meeting time and plan for the next day, I concluded that it was best
to abandon plans of revisiting the Computer History Museum with S V
Vikram, and instead have lunch at Komala Vilas together.

NG, Vikram and Sathish and I arrived almost simultaneously at El
Camino and Poplar Ave, and marched off to Komala Vilas after
exchanging excited greetings! The food at Komala Vilas was
significantly substandard that day. The gourd curry and the Koottu
(gravy) were not at all up to my expectations. The Potato Poriyal
(fried curry) was pretty good, though. NG and I had some good coffee
after that.

I didn’t have enough time to visit the Computer History Museum. I
decided to instead head toward San Jose Airport. After joining SVV, NG
and Sathish on the bus #22 back to San Antonio Shopping Center and
buying some food at Walmart, I waited till the others got onto bus #40
to Mountain View to see the Computer History Museum and show Vikram
the Googleplex.

Then I called Purnateja, whom I’ve known for many years, and asked
if we could meet at Sunnyvale as I passed through El Camino towards
Santa Clara transit center. I also sent Sujith Haridasan (KDE
contributor working on Plasma) a message, but he was too busy to make
it to a brief meetup at Santa Clara. Meeting Purnateja after several
years was exciting. We barely had time to grab a coffee, and I had to
head back to get onto my return flight on time.

I arrived at the airport well in time for my flight, and after some
general banter on the phone with Kumar, boarded the aircraft. It was
nice to see a lot of Burnt Orange (that’s the UT Austin / Longhorns
color) around the pre-boarding area again after quite a while! As I
type this, I’m on the flight. My Thinkpad is clocking excellent
battery life after I shut off networking and Bluetooth and put it on
aggressive powersave mode. Typing this blogpost is pretty much what
I’ve been doing all through the flight so far. I’m looking forward to
meeting Kumar at the Austin airport and getting back to my home away from
home.

Finally, I felt I had something about Physics that I wanted to write about. The i \epsilon terms sitting in the propagator of a QFT, in the Lippmann-Schwinger equation and in Chapter 4 of Peskin and Schroeder have been bothering a couple of students at the department, including me, for a while now. I am not qualified enough in Quantum Field Theory to make any serious comments on this, but I just had some thoughts regarding the i \epsilon. They may be wrong, and I request readers to correct me if there are mistakes, or if they have something to add to this.

At first look, the i \epsilon looked like some bizarre mathematical trick, put in by hand, to give meaning to integrals. “Oh, this integral diverges, but we want it to converge, so we just throw in an i \epsilon”. A lot of us were pretty dissatisfied with this. Also, there was this question too — there are these i \epsilon terms in (a) the propagator, (b) in the Lippmann-Schwinger equation, (c) in Peskin-Schroeder’s derivation relating the interacting ground state with the free-theory ground state, and (d) in the derivation of the path integral formalism from canonical QM — are they all stuck there for the same purpose?

The first time i \epsilon bothered us was in Peskin-Schroeder’s derivation of a relation between the free-field ground state and the interacting-field ground state, where they say “let us take time to infinity in a slightly imaginary direction”. Now, the question was, why should time become imaginary? A long argument on VoIP with Naveen Sharma was adjourned with this: “The T \to \infty(1 - i \epsilon) is a mathematical trick to suppress the contribution of all other states and solve for the interacting ground state in terms of the free-field ground state.”

Then came Prof. Weinberg’s notes on the Lippmann-Schwinger equation. As he explained in class, and as was explained in his notes, the right choice of ± i \epsilon in the Lippmann-Schwinger Green’s function fixes whether we are choosing in-states or out-states; i.e. states with the +i \epsilon in the Green’s function’s denominator satisfy the condition that they look like free particles in the asymptotic past, while states with the -i \epsilon look like free particles in the asymptotic future. A similar argument, with a bit more detail, is presented in Chapter 3 of his book “The Quantum Theory of Fields”, volume 1. He also references B. A. Lippmann and J. Schwinger, Phys. Rev. Vol. 79, No. 3 (1950).

So I briefly looked at the Lippmann-Schwinger paper, where they actually derive the equation. Then they make a comment: “simulating the cessation of interaction, arising from the separation of component parts of the system, by an adiabatic decrease in the interaction strength as t → ± \infty. The latter can be represented by a factor exp(-\epsilon |t|/ħ) where \epsilon is arbitrarily small.” Aha! So that epsilon came from an adiabatic (slow) decrease of interaction strength! But why are we forced to kill that interaction “by hand”? [PS: Loophole — I still don’t know the adiabatic theorem] I don’t know enough, but I’d ordinarily expect a “factor killing the interaction” to sit in the interaction Hamiltonian rather than outside it (see eqn 1.51 of the Lippmann-Schwinger paper).

At least now, the \epsilon factor had some physical meaning — it came from the adiabatic killing of the interaction, rather than being just some “pole-pushing technology”.

More came today. There is the same epsilon in the Fourier transform of a \theta function (Heaviside step function). One may write:
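The identity I have in mind is the following (up to Fourier-convention signs); the second integral is just the decaying exponential mentioned below:

```latex
\theta(t) \;=\; \lim_{\epsilon \to 0^{+}} \frac{i}{2\pi}
  \int_{-\infty}^{\infty} \mathrm{d}\omega \,
  \frac{e^{-i\omega t}}{\omega + i\epsilon},
\qquad
\int_{0}^{\infty} \mathrm{d}t \, e^{i\omega t}\, e^{-\epsilon t}
  \;=\; \frac{i}{\omega + i\epsilon}.
```

Closing the contour in the lower half-plane for t > 0 picks up the pole at \omega = -i\epsilon and gives e^{-\epsilon t} \to 1; for t < 0 the contour closes above, encircles no pole, and gives zero.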
This is something that I was supposed to know from Electrical Engineering, but we used to “throw away” the epsilon — it didn’t matter much there, I guess. Really, it’s just the Fourier transform of a decaying exponential (which every electrical engineer, from IIT Madras at least, would know) with the characteristic decay time taken to infinity. And then, today we worked out the Feynman propagator for the scalar field. I should’ve known this long back, but I learned only today that, really, the epsilon in the propagator comes from the \theta function’s Fourier transform.

So it seems like the epsilons — at least in (a), (b) and (c) — are present to impose causality.

And then, I learned something more today: an i\epsilon is going to make the Hamiltonian non-Hermitian (e.g., see the Green’s function in the Lippmann-Schwinger equation: it’s effectively adding a small non-Hermitian component to the Hamiltonian). And we see that Hermiticity of the Hamiltonian is required for time-reversal symmetry:
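Here is the sketch I have in mind (assuming an antiunitary time-reversal operator \Theta and a Hermitian H_0; \Theta conjugates scalars, so it flips the sign of i):

```latex
\Theta \, e^{-iHt} \, \Theta^{-1}
  = e^{+i\,\Theta H \Theta^{-1}\, t}
  \overset{!}{=} e^{-iH(-t)}
  \;\Longrightarrow\; \Theta H \Theta^{-1} = H,
\qquad
\Theta \,(H_0 - i\epsilon)\,\Theta^{-1}
  = H_0 + i\epsilon \neq H_0 - i\epsilon .
```

Equivalently, with H = H_0 - i\epsilon one gets \lVert e^{-iHt}\psi \rVert^2 = e^{-2\epsilon t} \lVert \psi \rVert^2, so norms decay toward the future and grow toward the past; the sign of \epsilon picks an arrow of time.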

Thus, if my logic is right, the i \epsilon is necessary to break the time-reversal-invariance in the system so that we can talk about an “in” state and distinguish that from an “out” state. Of course, this is unphysical in most situations as far as we know, so we do away with the \epsilon at the end.

Now, this brings me to a couple of questions:

Does that mean the weak interaction has a non-Hermitian Lagrangian density? [Need to check; sounds like a No]

We’re always time ordering in quantum mechanics. A naïve look gives me the impression that time ordering breaks time-reversal invariance. Then why are our theories time-invariant?

A couple of suggestions. I think of the i\epsilon in the propagator as something that results from the Laplace transform of the propagator. You cannot take a Fourier transform; it is not absolutely integrable no matter what. Another point: the Hamiltonian needs to be Hermitian no matter what, since your energy eigenvalues can’t be complex. The time-reversal symmetry holds if your Hamiltonian is real. This is why this property is lost when we have the vector potential corresponding to the B field (though the Hamiltonian is Hermitian).

> The i \epsilon is necessary to break the time-reversal-invariance in the system so that we can talk about an “in” state and distinguish that from an “out” state

I seriously doubt this conclusion. I don’t get how you are connecting up the propagator to the Lippmann-Schwinger equation. I would think that they are two independent statements.

Certainly, Hermiticity and time-reversal invariance have a relation. And the ± i \epsilon is what distinguishes in-states from out-states. Hence I drew this conclusion, but yes, it could be wrong.

But the Lippmann-Schwinger equation is the formal expression of a solution to the “inhomogeneous” (with a potential) Schrödinger equation. If D is the operator (p^2 / 2M - E), then the “free” Schrödinger equation is D\psi = 0. But suppose instead you want to solve D\psi = -V\psi. Then \psi looks like:

\psi = \psi_{free} - D^{-1} V \psi

which is the Lippmann-Schwinger equation, except that D^{-1} isn’t well defined. So you put in a ± i \epsilon in D^{-1} to match the condition that \psi must look like \psi_{free} as t → ± \infty. So really, it’s just the Green’s function for the Schrödinger equation, IMO.
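Written out with the ± i \epsilon made explicit (using -D^{-1} = (E - H_0)^{-1}), this is the familiar form:

```latex
|\psi^{\pm}\rangle \;=\; |\phi\rangle
  + \frac{1}{E - H_0 \pm i\epsilon}\, V\, |\psi^{\pm}\rangle ,
```

with the +i\epsilon solution behaving like the free state |\phi\rangle in the asymptotic past (an in-state) and the -i\epsilon solution behaving like a free state in the asymptotic future (an out-state).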

I remember the time when I knew how to write code / fix bugs, but had no clue about build systems, version control or anything like that. And of course I didn’t know how to write clean code. I was trying to make a KStars-like program in MS DOS, “stealing” KStars’ data, but I didn’t know how to do something, so I looked up KStars’ code. It was so much neater than my horrible code. Finally, I gave up on my DOS program and I wrote a mail to Jason Harris, the author of KStars, to say I wanted to help. He pointed me to the docs on building KDE4 on techbase.

I got stuck a couple of times while building some of the dependencies. I sent a couple of emails to Jason Harris, showing the output from the failed build, but of course, Jason couldn’t keep helping me all the time. The lack of a working build, in fact, caused me to give up for a while. It took attending a KDE Project Day at FOSS.IN 2007 to bring me back.

Today, I saw a mockup of a “buildhelper” tool that the OpenHatch project is working on making. (If you haven’t heard of OpenHatch, here’s a sentence from the about page: “We believe that our community loses tons of prospective members because learning how and where you can fit in is difficult.”)

Each step has the basic name/description of a step, a more detailed explanation (if necessary), a link (if necessary, e.g. to the relevant git repository), an estimate of how long the step should take, a help button that links to either a relevant OpenHatch mission or other background-info tutorial that a project favors (optional), and a “crap, it didn’t work and I don’t understand why” button that takes the user to the project’s IRC channel (the dev channel specifically, if the project has more than one), where they can solicit help from more experienced project volunteers and point to the specific step where things went wrong.

I think in my context, the IRC link would’ve saved me.

The good thing about KDE is that there is a lot of build documentation already. But it would be awesome to make it interactive and clear, step by step.

OpenHatch is an open source project, so it’ll be ready when it’s ready. But Asheesh said on IRC that they’re expecting to release this feature by the end of April, and they’re going to have templates which various projects can fill in, to make the tool specific to their build system.

It would be awesome if KDE participated. I can see that it would benefit the several potential contributors who know some C / C++ from high school etc, but have never worked with a large project that requires a build system.

Well, this was a little hurried and late, but I introduced some largish changes into what’s going to become RC2 just a few minutes back. No new strings, to the best of my knowledge. Let me explain what they are.

Harry de Valence worked with KStars this summer and his (GSoC) project involved writing code that would render the sky map with OpenGL, while still keeping the native QPainter drawing, to ensure that KStars doesn’t crawl on older hardware without hardware acceleration. Well, Harry’s task was rather tough, because he had to deal with badly structured code, but he was able to give us a working OpenGL sky map with a few regressions at the end of his project. The most important unsolved problem, however, was that there was no way to switch between the two paint engines at run time.

We decided not to leave the code in Harry’s branch, and merged it into trunk, so that the code would be more accessible, and we might arrive at a solution at least before release. When 4.6 was branched out from trunk, we had a version of KStars that would throw up a black screen with some text at the corners upon launching! This was one of the regressions with OpenGL — you couldn’t use infoboxes (those boxes that tell you what KStars is looking at, what the time of the simulation is etc.) with GL, so you had to manually disable them to see the sky map. And to add to that, we had a couple of other problems with GL. And you couldn’t switch out of GL without rebuilding KStars.

Of course, one of the solutions was to just disable GL and fallback to QPainter, without adding any new functionality, but lots of new code. I didn’t want to subscribe to this — it gives me this feeling that I’m doing what some proprietary software companies do to enforce DRM.

Our solution to this problem involved breaking up a class that’s very central to KStars — the SkyMap class, which took care of the sky map in totality — painting, projection, events. Not good. Harry had already broken away projection, and in the process of cleaning the code, solved several really strange bugs in KStars. What was left was to break up painting and event handling, so that we could change our canvas at run time.

This time of the year being the winter break at UT Austin, I thought I should finish this. My implementation of this (warning: physics student trying to write code) is now merged into the 4.6 branch. I should’ve probably done it before, but yeah, it stretched to fit the time before RC2.

The only regression I observe with this change over the usual native painting functionality is that infoboxes flicker when you drag them around. I’m sure that’s a very small price to pay for the awesome speed and smoothness that GL gives you when you need it.

So you can now switch backends at runtime:

Switching to OpenGL backends at runtime!

Testing

We most certainly need a whole lot of testing. Please help the KStars team by seeing if it builds, renders fine, switches backends with no issues, and file any bugs you may have on bugs.kde.org. We will try to fix them before the final tag.

I just built KStars from trunk and gave it a try with the fglrx driver. I am sorry to say that it does not work at all :-( I only see a white screen, and when I start to move the scene I get lots of flickering with visible stars in between.

If I remember correctly, it was reported that it worked fine with Mesa’s software rendering. I have no way of testing that easily before the final version is out. Wouldn’t that indicate that the problem is in nouveau? As for the fglrx driver, do you have any clue where the problem might be?

Oh, didn’t realize that. I hardly twiddle with the hidden objects settings :)
If you built with debugging, then you’ll see the FPS numbers being written to the console. Maybe we should do something more elegant.

Context

I feared that my 2.5-year-old Dell Inspiron 1525n (yes, it came with no Windows!) was growing weak with "age" (effective age = age * roughness of handling), and therefore I decided to make use of Thanksgiving deals to get a Lenovo ThinkPad Edge 14".

My Hell Perspiron (as I nickname it) gets as hot as hell and shuts off with the slightest processor load. Plus, the SATA hard-disk is showing signs of impending gradual failure. So I think it was a good decision anyway.

First Looks

From what I hear, this laptop is not really a *ThinkPad* (as in a T-series ThinkPad), but is a ThinkPad nevertheless ;-) — that’s enough.

So let’s see. I paid $640 + $50 tax for it instead of the projected price of $1100+ and the "usual" total price of $860. It came via UPS, with free shipping.

Unlike stuff I read online, my laptop doesn’t have a glossy back — no fingerprints, etc. I’m not very bothered about the TrackPoint. It does kind of get in the way, but not much. The keyboard design and feel are extremely good. It feels very nice to type on.
However, by default, one needs to hold down the ‘Fn’ key to input F1 through F12! Without Fn depressed, these keys default to mute, volume, brightness, etc. I was really frustrated by this, but a little Googling found me a solution (mentioned later). There’s another thing I do not like: Ctrl and Fn are flipped from their positions on Dell (and, I think, most other) laptops. But this seems to be a feature of all ThinkPads. Thankfully, Lenovo has some very nice BIOS options that let you configure these behaviors.

Installing Debian

Booting the installer
During first boot, after I randomly answered the Windows configuration questions, Windows detected my WiFi network and connected. I learned from my friend Kumar Appaiah about UNetBootIn. I had originally planned to follow a wiki page that a couple of us had compiled. But I gave UNetBootIn a try, and it failed. It did, however, install WinGRUB successfully. The kernel refused to load, saying "Invalid file format" or something to that effect. So I went back to Windows, obtained the kernel and initrd.gz for the Debian installer from IITM’s FTP server, and booted into the installer as outlined on the wiki page.

Partitioning
Kumar recommended that I try LVM. So I created a non-LVM physical /boot partition (required), and an LVM physical volume, which I split into several logical volumes. I also left 5 GB in a non-LVM physical partition, just in case. I deleted the ThinkPad’s boot partition, which might have been a bad idea :-S.
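For the curious, the LVM layout can be sketched with the usual LVM2 commands. The installer’s partitioner did the equivalent for me; the device name, volume group name, and sizes below are made up for illustration and are not my actual layout:

```shell
# Turn the partition set aside for LVM into a physical volume
# (/dev/sda2 is a placeholder for the real partition)
pvcreate /dev/sda2

# Collect it into a volume group named "vg0"
vgcreate vg0 /dev/sda2

# Carve logical volumes out of the volume group
lvcreate -L 15G -n root vg0
lvcreate -L 4G  -n swap vg0
lvcreate -l 100%FREE -n home vg0

# Put filesystems on them
mkfs.ext4 /dev/vg0/root
mkfs.ext4 /dev/vg0/home
mkswap /dev/vg0/swap
```

The nice part is that the logical volumes can later be resized with lvresize without repartitioning the disk.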

Installing Packages
I used the default mirror in the US: http://ftp.us.debian.org. It turns out that the University of Texas mirror (ftp.utexas.edu) is much faster, even when I’m at home.

Post Install
Post install, Debian booted into a command line. It took me a little work to get basic stuff set up (bash completion etc.), and then I installed KDE (aptitude install kde-standard) and booted into it. It turns out that testing now has KDE 4.4.5.

The graphics card is not a fancy NVidia one, so it is at least not an immediate concern; it should work out of the box. At least I see a graphical display :)

WiFi did not work out of the box. I have an RTL8191SEvB controller, as indicated above (AFAIK, not all ThinkPad Edge 14s have the same one). A little Googling pointed me to a blog post, which in turn pointed me to the RTL8191 drivers on RealTek’s page. I like to use wpa_supplicant, because I’m comfortable with it. So I used wpa_passphrase to generate the configuration for wpa_supplicant, put that into a newly created /etc/wpa_supplicant.conf, then got rid of the network-manager service and ran wpa_supplicant manually.
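The sequence looked roughly like this. The SSID, passphrase, and interface name (wlan0) are placeholders, and the -D wext driver backend and the final dhclient step are my assumptions about a typical setup of that era, not an exact transcript:

```shell
# Generate a wpa_supplicant network stanza from the SSID and passphrase
# ("HomeNetwork" and the passphrase are placeholders)
wpa_passphrase "HomeNetwork" "secret-passphrase" > /etc/wpa_supplicant.conf

# Stop network-manager so it doesn't fight over the interface
/etc/init.d/network-manager stop

# Run wpa_supplicant in the background on the wireless interface
# (assumed to be wlan0 here), using the generated configuration
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf -D wext

# Finally, get an IP address via DHCP
dhclient wlan0
```

Note that wpa_passphrase writes the plaintext passphrase into the file as a comment, so you may want to strip that line and tighten the file’s permissions.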

Changing the behavior of the Fn key
A little more Googling told me that I could set the behavior of the ‘Fn’ key and swap the ‘Ctrl’ and ‘Fn’ key positions with the BIOS configuration utility. I rebooted, hit ‘Enter’ to get to the BIOS, (Fn +) F1 to edit the BIOS configuration, and went to ‘Keyboard’ to find the options I was hoping for. I changed the behavior of F1…F12 to ‘Legacy’, and swapped the Ctrl and Fn keys. I’m now comfortable!

Touchpad

The touchpad, on Linux, works just the way I want it to by default — no tap to click; vertical scroll by sliding your finger on the right edge of the touchpad (in Windows, you had to use multitouch by default to do this, which I don’t like).

First Impressions, Summarized

So far, I think the Lenovo ThinkPad Edge 14" is a very nice laptop. No complaints at all — like the rest of the ThinkPads, it looks a lot sturdier than a Dell Inspiron, it has a matte finish, I could work around my complaints with the keyboard, and getting WiFi working wasn’t as bothersome as it usually is. The only thing I didn’t like is that it came with Windows 7 installed, and an ugly sticker that proclaims the same.

Thanks to the efforts of Henry de Valence this summer, KStars can now use OpenGL to render the sky map much faster than before on good graphics hardware. Today, after many mistakes, the merge finally succeeded and trunk now has OpenGL support.

A lot of functionality is still broken in the OpenGL version, and we hope to fix it before KDE SC 4.6 is tagged. If you have development skills, or experience fiddling with Qt’s OpenGL framework, and some time to spare, this is a good time to help us out. Bug reports will also be much appreciated.

But there’s something that I’ve been thinking about a bit these days — should KStars continue? I wonder if a lot of people actually use KStars. The one reason I like KStars is that although it’s not flashy, graphically appealing, or very beginner-friendly, I think it does a very good job of catering to the keen armchair astronomer or educator. It has tools that popular software like Stellarium probably don’t have. At the same time, there is better software for the advanced amateur astronomer, like Skychart, which may be difficult to use and not at all flashy, but does an amazing job.

With Stellarium being such a popular and awesome program (I really like a lot of things about Stellarium; we’ve frequently picked up ideas from it), I wonder if there really is reason to further the development of KStars. I’m not the maintainer, so I really have no right to comment, but these are just my thoughts. Other astronomy software seems to have a good deal of developer power, which it is using to dash ahead, while KStars hasn’t accrued a lot of features in the recent past.

Enough lamentation — there are some things that KStars does really well. I really like the outcome of last year’s GSoC by Prakash Mohan — the Observation Planner. I think we have one of the better open source observation planners around. I got an opportunity to actually use it in the field once, and while I still find a lot of scope for improvement, I think it’s one of the best tools I’ve ever used.

Actually I don’t own a telescope or anything like that, so I’m really not the kind of user who can judge kstars as an observation tool. Nonetheless, I love kstars because it’s an easy-to-use informational/educational tool. I use it very often during summertime, and even in the rest of the year I launch it at least once a week.

I’m evaluating the chance to purchase an entry-level telescope next summer, and I’m already taking a look at what the market offers, basing my choice on hardware supported by kstars. Maybe that’s not the best way of choosing one, but kstars has been my way of exploring the sky on the PC for too long a time to even think of quitting it because it’s not the absolute best.

Thank you very much to you and all other kstars developers, and please don’t stop working on it! :-)

It has tools that popular software like Stellarium probably don’t have.

You have no idea. :) Stellarium’s main point is being shiny. For some time, it was sponsored by a company that used it in planetarium projectors, and as a result, the desktop features suffered. There is room for a lot of improvement, but it still is a “star show”, not a “star chart” program.

Skychart/Cartes du Ciel is a “sky chart” program that caters better to amateur astronomers, but as you said, it is more powerful yet less user-friendly. Being powerful but user-friendly is a niche that KStars can strive to occupy. :)

Other astronomy software seem to have a good deal of developer power, that they are using to dash ahead…

I hope you don’t put Stellarium in that category. We are a handful of people – the lead developer was the only constant, most others seem to appear/disappear sporadically. I’ve been rather active recently, but that’s because I had too much free time (and that’s going to change soon).

I use KStars every once in a while to check what I’m seeing in the night sky at various times. I think it could be a very useful learning tool in an educational setting as a starting point for teaching concepts in astronomy. Do you know if it is used in schools anywhere?

I use both Stellarium (with Xfce on an older laptop) and KStars (on my primary laptop). Both programs have niches they each fill nicely in a true ecosphere of choice, which is what floss is all about. After all, you never know where or when feature cross-pollination occurs or what program might inspire some new programmer to try their hand at an ongoing large project.

I’m not an astronomer myself, but I do like to check on the location of particular stars or planets periodically (the latter more often), so I would definitely not want to see kstars disappear.

One issue, I think, is cross-platform capability. Although KDE ostensibly runs on Windows, it really isn’t in a state where I could recommend any KDE app to anyone. I am not familiar with the Mac port, but last I heard there was very little happening with it. Once proper cross-platform support, at least for Windows, has been available for a while, I think we will be in a better position to judge how KStars stacks up against the competition. But in the meantime, I don’t think such comparisons are valid.

I’m not into any kind of astronomy whatsoever, but I’m all for anything that helps make education visual and interactive. I hear of people working in schools using KStars as a good showpiece for KDE Edu and KDE itself, which sounds like value to me.

Well, I must say I regret being skeptical about KStars. I just figured that most people tend to mix and match astronomy software, each best for its own use case. And I didn’t know Stellarium shares our state as far as development is concerned. I hope I can submit some patches there too.

A quick update on what’s been happening deep down the bunkers of KStars.

GSoC student Henry de Valence has been very successful in getting a working OpenGL-rendered sky map. While we still don’t use GL goodness for things like atmosphere simulation, light-pollution simulation, or cool animated graphics at the end of this GSoC, the GL version is much faster. In the process, Henry had to clean up a lot of old KStars code — he has a shiny new class that handles sky map projections in a clean manner, for instance. This has solved a good deal of really annoying bugs that had been plaguing KStars for quite a while. You can check out his branch at branches/kstars/hdevalence if you’re interested.

Our other GSoC student, Victor Carbune, who had to officially resign from the program half-way through after being offered an internship at Google (Yay!), still plans to continue contributing to KStars. So far, he has ported our deep-sky object data into a neat relational database and unleashed flexible search power for the user. Prakash Mohan (his mentor) and Victor are planning the course of the merge. His branch lies at branches/kstars/carbonix.

New in trunk is a feature called the Moon Phase Almanac. To access it, fire up the KStars Calculator (Ctrl+C) and you’ll find it under “Solar System”. It shows the moon phases for an entire month, so that you can plan your observation schedule easily. For the uninitiated: those who like to see the moon through a telescope would probably choose to observe on a day close to the half moon, while those who like to observe deep-sky objects would pick a date close to the new moon. The idea was to make something similar to http://stardate.org/nightsky/moon/ available off-line, within KStars. I hope the feature will pick up a few improvements as we go, which will integrate it better with other features of KStars.

The implementation is a hack built around KDateTable, with a rewrite of KDatePicker’s features on top of it, since it seems to me that there’s no other way around this at the moment. Because of this, only the Gregorian calendar is supported. KDateTable is subclassed and KDateTable::paintCell is overridden. There’s a GenericCalendarWidget, located in kstars/widgets, which accepts a custom KDateTable subclass and draws month/year controls on top of it; that’s what this feature uses. It would be really nice if KDatePicker gave me access to its internal KDateTable and let me replace it with my own subclass, but that isn’t possible at the moment.

We have Henry de Valence (hdevalence) working on OpenGL rendering in KStars, and Victor Carbune (carbonix) working on improving astronomical catalog support and social semantic features for KStars this summer.

OpenGL rendering will enable KStars to have far better aesthetic appeal than it currently does. Of course, we cannot raise the bar on hardware requirements just for aesthetic appeal, so Henry plans to ensure that the user will be able to switch between intensive graphics that require OpenGL and the existing graphics that are painted using QPainter.

Victor’s project is rather broad. KStars uses data from astronomical catalogs, currently stored in flat text files, to determine where to paint a celestial object (like a galaxy), or how bright the object is. As a first step, he will get KStars to use SQLite to store and retrieve this data. Then, he plans to explore a different way of storing and reading star catalogs (which demand a lot of optimization!) that will be more extensible than our current implementation. After that, he plans to look into adding features to KStars that promote community integration amongst amateur astronomers.
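To give a feel for what a relational catalog buys you over flat text files, here is a toy sketch using the sqlite3 command-line tool. The table and column names are made up for illustration and are not KStars’ actual schema:

```shell
# Build a tiny deep-sky object catalog and run a flexible query against it
sqlite3 /tmp/dso.db <<'EOF'
CREATE TABLE dso (
    designation   TEXT PRIMARY KEY,  -- e.g. "NGC 2903"
    type          TEXT,              -- galaxy, nebula, cluster...
    constellation TEXT,
    magnitude     REAL
);
INSERT INTO dso VALUES ('M 33',     'galaxy', 'Triangulum', 5.7);
INSERT INTO dso VALUES ('NGC 2903', 'galaxy', 'Leo',        9.0);
-- Flexible search: all objects brighter than magnitude 8
SELECT designation FROM dso WHERE magnitude < 8;
EOF
```

With flat files, every such query means a custom parser and a linear scan; with SQLite, queries by constellation, type, or brightness are one-liners, and indexes can speed them up.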

Hopefully, at the end of these two GSoCs, KStars will not only look a lot better, but will be a lot more friendly to the hobby astronomer.