Gallery

Everything posted by John Lokanis

QUOTE (Aristos Queue @ Aug 31 2008, 10:52 PM)
I could zip up the code and send it to them, but they would not be able to run it. The code relies on a continuous flow of data between an in-house SQL server and the application. I suppose they could inspect it, but that would be about it.
Also, since I only see this error from the EXE version of the code and only after hundreds of units are tested (many days of testing), I suspect it would be nearly impossible. My only hope now is to have an NI engineer visit our site and see the code in action, or to find the bug myself. It would be great if I could find something I screwed up that is causing this, but I can't think of a single thing that could. I wish I knew of another condition that could cause the refs to become invalid...
Regarding the launcher, no, that part of the app continues to run even after the error. It displays the status of each of the spawned VITs and allows you to view their FP via another VI with a sub panel. I am able to interact with it even after the error is reported. Just not the VIT that reported the error.

QUOTE (Val Brown @ Aug 29 2008, 12:55 AM)
FWIW, the Database Connectivity Toolkit is just a wrapper around ADO.NET. And not a very good one at that. It is extremely slow when working with large record sets.
If you truly want to be native, then write your own DB VIs calling ADO.NET yourself. If you use some of the tricks from Brian Tyler's old blog, you can get a 10x+ speed improvement.
-John

QUOTE (Aristos Queue @ Aug 29 2008, 08:25 PM)
Well, I do not allocate millions of queues, but I do allocate thousands over the course of running the app.
My concern is this: I spawn many instances of a VIT to run a set of tests on a product. Each of these instances is composed completely of reentrant VIs (so they will not block each other). These reentrant VIs are all of the 'shared clone' type. They create the unnamed queues and pass them around to move data between parallel portions of the program. The only code in the entire app that can kill these queues is in the cleanup VI that is forced by dataflow to be the last thing executed by this spawned (from the VIT) VI.
Now, the launcher that spawns these VITs sets the spawned VI to Autoclose reference. So, the launcher is not responsible for dealing with this reference. When the spawned VI finishes execution, it will leave memory, as will all of its queues, notifiers, etc.
So what confuses me is this: if each spawned VIT creates its own queues (in sub VIs) and then listens to those queues (in other sub VIs), and the only code that can destroy those queue refs is also in a sub-VI of that VIT that dataflow forces to execute last, how could I ever get the error "Refnum became invalid while node waited for it."? Even if the VIT was stopped by an external VI, this error would never happen, and the code that logs the error to the event log would also not execute. So, something is stepping on my queue refs. If it was memory corruption, then what could be causing it? When I see this, my app is using about 100MB. The machine has 4GB of RAM and no other apps are running.
I suspect that the 'shared clone' reentrant mode and queue refs have some latent bug.
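As an illustrative analogy only (Python threads and `queue.Queue` standing in for LabVIEW's parallel subVIs and unnamed queues; all names here are made up), the lifecycle described above looks like this, with `join()` playing the role of the error wire that forces the release to come last:

```python
import queue
import threading

def producer(q, items):
    # Stands in for Enqueue Element running in a parallel subVI.
    for item in items:
        q.put(item)
    q.put(None)  # sentinel: no more data

def listener(q, out):
    # Stands in for the Dequeue Element loop: blocks until data arrives.
    while True:
        item = q.get()
        if item is None:
            return
        out.append(item)

# "Obtain Queue" with no name: a private queue reachable only through
# the references we pass around (wires, in LabVIEW terms).
q = queue.Queue()
received = []

t_prod = threading.Thread(target=producer, args=(q, [1, 2, 3]))
t_cons = threading.Thread(target=listener, args=(q, received))
t_prod.start()
t_cons.start()

# Cleanup ("Release Queue") must come last; join() plays the role of
# the error wire that forces execution order by dataflow.
t_cons.join()
t_prod.join()
del q  # only now is it safe to invalidate the reference

print(received)  # [1, 2, 3]
```

As long as nothing outside this chain can reach the queue, the reference cannot become invalid while the listener waits; that is exactly why the error above is so puzzling.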

QUOTE (jdunham @ Aug 29 2008, 04:07 PM)
I am only creating the queue in one place and destroying it in another. I do not 'obtain' an existing queue anywhere because I am using unnamed queues. I just pass the queue reference to the VIs that need it.
I do use force destroy, however. Maybe I should stop doing that, even though in this case it should not matter.
QUOTE (Aristos Queue @ Aug 29 2008, 04:18 PM)
We don't use a true GUID. We use a fixed count for the first several bits and a random value for the last few. In order to get any recycling of the unnamed queue IDs you would not only have to generate roughly 30 million queues, you would also need to get particularly (un)lucky on the other bits. That seems unlikely.
What about memory corruption? I notice that when this problem occurs, the whole app also starts to slow down AND memory usage starts to increase.
BTW: This problem only happens in the EXE deployed to a target machine and only after running for several days. So, I really have no way to debug it with breakpoints or anything. At least I log the errors to the event handler...
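The ID scheme Aristos describes above can be sketched like this (a hedged toy model; the bit widths are my guesses, not LabVIEW's actual layout, chosen so that 25 counter bits wrap after ~33.5 million IDs, matching the "roughly 30 million" figure quoted):

```python
import random

# Illustrative bit widths only -- assumptions, not LabVIEW internals.
COUNTER_BITS = 25
RANDOM_BITS = 7
_counter = 0

def new_refnum():
    """Hypothetical sketch: a wrapping counter in the high bits plus a
    few random low bits. A stale refnum can only be 'revived' if the
    counter wraps all the way around AND the random bits also match."""
    global _counter
    _counter = (_counter + 1) % (1 << COUNTER_BITS)
    return (_counter << RANDOM_BITS) | random.getrandbits(RANDOM_BITS)

a, b = new_refnum(), new_refnum()
print((a >> RANDOM_BITS) != (b >> RANDOM_BITS))  # True: counters differ
```

Under a scheme like this, accidental ID reuse after only thousands of queues would indeed be effectively impossible, which is what makes the observed behavior look like corruption rather than recycling.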

QUOTE (PJM_labview @ Aug 29 2008, 02:56 PM)
Yes, if I was creating the queue that way, it would be a problem. But I am not.
This is an extremely simplified version of my code. Each of these functions is actually buried in several sub-VIs. And all of them are reentrant (shared clones). The top VI (this VI) is a template that gets spawned many times. Also, I have over 15 queues, not just the one shown here. There are no functional globals or dynamic calls to anything that creates the queue refs. Everything is tied together by wires, just as you see it here.
http://lavag.org/old_files/monthly_08_2008/post-2411-1220050049.jpg
The Dequeue element gets an error stating the reference has become invalid while waiting. How could this happen??

Thanks for the reply. That is definitely a way to cause a queue to be deallocated. In my case, however, I don't think that is possible. The structure of my code has a main VI that calls a sub VI to create the queue and then passes the queue ref to another sub VI that listens to the queue. When the listener quits, it passes its error cluster to the sub VI that destroys the queue. Since all of these VIs are part of the main VI, I don't see how it is possible that the queue reference would be automatically removed from memory. The VI that gets the error is running as a sub VI of the same VI that called the VI that created the queue.
The interesting thing is everything seems to work well for a long time and then it all goes to heck. As you can see from the error, the 'main.vi' has been spawned from a template 422 times and the reentrant subVI that got the error is one of 34 in memory right now, all listening to their own 'version' of this queue.
I think the LV engine gets 'confused' and screws this up. I can see many examples of this happening in various parts of my code where queues either become invalid while waiting or are invalid when passed to a subVI, even though a release was never called and their creator VI is still in memory and supposedly still 'reserved for run'...
Perhaps there is some issue with all these VIs being reentrant? I only use the shared clones mode, but none of them have an uninitialized shift register...

I have run into a very strange problem. I am getting sporadic occurrences of an error with one of my queues. Here is the error:
Error 1122 occurred at Dequeue Element in Process GUI Events.vi:34->Engine 422.vi
Possible reason(s):
LabVIEW: Refnum became invalid while node waited for it.
The weird thing is, as far as I know, this can ONLY happen if the queue is destroyed in some parallel process while this VI is waiting for an element to be enqueued. But, I have searched all the VIs and the only one where the queue is destroyed is in the cleanup VI that comes after this VI and is connected by the error wire. So, there is no way that cleanup VI could execute before the VI that is waiting.
I have a sneaking suspicion that there are some latent bugs in the queue feature. I have a large number of reentrant VIs running and I create a lot of unnamed queues that I pass inside a cluster to sub VIs. So, there are many, many instances of this queue (all unique, supposedly) that exist within each tree of reentrant VIs. I thought LabVIEW used a GUID to name unnamed queues so they could never step on each other, but maybe because I have so many, the 'name' is getting reused?
Any other ideas? I am at a total loss.
thanks,
-John

QUOTE (Val Brown @ Aug 28 2008, 03:01 PM)
This example is strictly standard LV code. No add-ons of any kind were used.
The example is written in LV8.5. I don't have older versions so I cannot down-rev it if you do not have 8.5. I would avoid .NET in 8.2 anyways since there were significant bugs.
That said, I use OpenG in many places in my apps and strongly recommend others use it too.

I'm not completely sure I understand all your questions, but if you are asking if this format can be read, the answer is yes. However, I have found that in LabVIEW, you cannot make a generic XML reader easily. In most cases, you need to know the format of the XML document when you write the LV code to parse it.
That does not mean that you cannot have variable length sections, like your excerpt shows. You can use the features in the MSXML.NET assembly to determine child count and iterate through the elements, building an array in LV. This is something I do all the time. I have a much more complex XML file that describes an entire test hierarchy that has N test plans with N test suites, each with N tests, each with N parameters and N measurements. I am able to read this into a complex array of clusters of arrays of clusters, etc...
So, it can be done, but you need to think carefully about the structure of the XML you will receive and then design your code to deal with the sections that are variable in length.
I also noted that your example stored the data in elements but still used attributes to specify information about each element. So, you will need to deal with extracting that part of the data and use it to organize the elements in their proper order, unless you can assume they will always be formatted in ascending numerical order (which is likely but nothing is ever guaranteed).
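For anyone following along in a text language, the same pattern (get the child count, iterate, then order by the attribute rather than trusting document order) can be sketched in Python with a made-up document; this is only an analogue of the .NET calls used in the LV example:

```python
import xml.etree.ElementTree as ET

# Made-up document in the shape discussed above: a variable-length
# section whose elements carry an ordering attribute.
doc = """
<tests>
  <test index="2"><name>Ramp</name></test>
  <test index="1"><name>Power-up</name></test>
  <test index="3"><name>Leakage</name></test>
</tests>
"""

root = ET.fromstring(doc)

# Get the child count and iterate -- the same pattern as walking the
# DOM's child nodes from LabVIEW via .NET.
tests = root.findall("test")
print(len(tests))  # 3

# Don't assume document order: sort by the ordering attribute instead.
tests.sort(key=lambda t: int(t.get("index")))
names = [t.findtext("name") for t in tests]
print(names)  # ['Power-up', 'Ramp', 'Leakage']
```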
Anyways, I think the example code gives you a good start toward reading this but I suspect you will need to modify it to fit your format. My goal was not to make a generic XML reader but rather demo the basics of accessing XML data and protecting it with a schema.
You might also check out JKI's EasyXML tools. I think they might be useful to help parse your XML.

QUOTE (MJE @ Aug 26 2008, 05:37 AM)
Yes, that is an option, but then they would have to select the schema file or I would have to hard code its name into the code. Also, if I choose to put the schema in another directory, I would have to hardcode that full path into the code. If I left the schema in the same directory and used the relative path of the XML doc, then there is nothing that would stop them from seeing it and editing it as well.
Finally, by having the schema specified in the XML and having it local to the XML, other XML editors can use this to enforce the schema when editing the file. This helps avoid errors when making edits. My goal is to help the user not make a mistake when editing that would cause the file to be unreadable. If they wish to be malicious, then I really don't care.
But if someone sees value in having the schema specified externally, that can be done. As one of my old professors used to say: 'That exercise is left for the student'.
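For reference, this is roughly what the in-file schema reference looks like in a hypothetical config file (file, element, and value names here are all made up); the relative path is what lets schema-aware editors find and enforce the XSD while the user edits:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- config.xsd is assumed to sit in the same directory as this file -->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="config.xsd">
  <timeout>30</timeout>
</config>
```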
BTW: I'm glad you like the callback trick! I'll have to post a few more examples of interesting 'tricks' I have come across over the years...
-John

QUOTE (Neville D @ Aug 22 2008, 10:55 AM)
Sorry it took so long. I had to make a generic version and add some comments. I posted it in the code repository in development section. Here is a link:
XML File Reader in CRID

Here is an example of how to read an XML file, extract attributes of elements and also validate it against a schema.
I use this to read user-editable configuration files for my project. By using XML with a schema, I can control the format of the file and verify that it adheres to this format when reading it. This allows me to abort application startup with a meaningful error message if someone makes an invalid edit to the configuration file.
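A rough Python analogue of that boot-time check (a minimal sketch: the stdlib only checks well-formedness, not the XSD itself; the real schema validation is what the attached LV example does via .NET, or what a library such as xmlschema provides):

```python
import sys
import xml.etree.ElementTree as ET

def load_config(xml_text):
    # Minimal sketch of the startup check described above. The
    # line/column reported by ParseError is enough to abort startup
    # with a message pointing at the bad edit.
    try:
        return ET.fromstring(xml_text)
    except ET.ParseError as err:
        line, col = err.position
        sys.exit(f"Configuration is invalid at line {line}, column {col}: {err}")

cfg = load_config("<config><timeout>30</timeout></config>")
print(cfg.findtext("timeout"))  # 30
```

A malformed edit (say, a missing closing tag) makes `load_config` exit with the offending line number instead of letting the app start with bad settings.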
This is also a good primer on how to read XML files using .NET calls. I hope you find this useful.
I also suggest you read some of the tutorials on the net about XML and XSD (schema) to understand how to create and edit your own files. In addition, I recommend XMLSpy by Altova for working with XML and XSD files.
-John
Download File:post-2411-1219705387.zip


My local NI rep just told me about a new OOP training class that is being offered on a Carnival Cruise out of Galveston, TX. If you don't believe me, go to NI's site and look up the OOP class and check the schedule.
http://sine.ni.com/apps/utf8/nisv.custed_d...ce_list_id=1000
-John

At the risk of inciting controversy or being redundant to other threads in the past, I am wondering how many people feel that LabVIEW should change its image with regards to the programming language G?
I grow weary of constantly having to explain to people what 'LabVIEW' is. Most seem to think of it as an application, like Excel, that interfaces to other 'real' code. I have a hard time convincing them it is just the development environment for writing programs in the G language.
I also am tired of having all of the applications I create generically referred to as 'LabVIEW'. Often, co-workers will say "this is where our C code is called by LabVIEW" and I want to correct them and say, "no, this is where your C code is called by the test system, which just happens to be written in G. It could have been written in any language and LabVIEW has nothing to do with it." The reason this bugs me is they try to blame errors in their interface on LabVIEW. Lack of understanding breeds fear and contempt.
And NI is not helping matters. I really think it is time they stop calling it LabVIEW. Or, at the very least, make a separate version that is strictly for G programming and have all the HW specific stuff be an add-on called LabVIEW. I mean, how many of us do anything that has to do with a Lab anymore? LabVIEW has left the lab and is out in the world doing everything, everywhere. It needs to be rebranded if it wants to get wider acceptance.
So, here are my crazy ideas:
Call the dev environment something other than LabVIEW (G-View?, DataFlow Studio?)
Make a stronger effort to call the language G and not LabVIEW.
Change the name of VIs to something else. They are not virtual instruments, they are user interfaces, sub routines, functions, etc... And change the file extension to .g
Make the point that any application can be written in G. And if that is not true, then work to add features to make it true. I know more and more of the features of each release are written in G. Why not work towards going all the way and have the G compiler written in G! That is one definition of a true programming language that a lot of CS people use. NI should strive to reach that goal.
Well, that is enough ranting for now. I feel better already. I suppose if no one agrees, then this thread will die quickly and I will get back to work. But, I do hope there are others out there who long to be understood by their co-workers and employers as true software developers.
-John

If you are worried about the user editing an INI file and putting in bad values, my solution is to use a small XML file and an .XSD (schema) file. When the app boots, it reads the XML file for its 'ini' settings. I first validate the XML against the schema to verify it has no errors. If they messed up the XML, then the app will report the error and what line is invalid, then exit.
This is not that hard to do if you have used any .NET functions before in your LV code. I can post some examples if that would help.
-John

QUOTE (BobHamburger @ Aug 20 2008, 09:30 PM)
Like I said in my post above, there is still a GOTO statement in C, but nobody uses it anymore! So, removing them may not be a good idea, but LV training classes should definitely show new DEVs how to avoid them.
Actually, they are a good indicator of LV experience when reviewing code from someone who applies for a position.
I am afraid the new diagram cleanup feature may help hide some applicants lack of experience. But, on the other hand, it might help teach them good style...

QUOTE (crelf @ Aug 20 2008, 07:51 AM)
I agree with that. I also use .NET a lot in my code. It is great for accessing lots of powerful OS functionality. I also use it with XML files and databases. Like I said in my first post, the only caveat is to not do any large enumerations within a loop. Unfortunately, since .NET was designed to work with virtual machine languages (VBScript, JavaScript, etc.), most of its data access is in the form of enumerations (get the count, then loop on the count and get each item) instead of passing blocks of memory directly.
The ArrayList 'trick' on Brian's blog shows how to get around this. I use this when retrieving record sets from a database and it speeds things up by at least 20x.
The problem is the overhead LabVIEW experiences when calling out to .NET. I will see if Brian can provide a description of this and what, if anything, can be done to improve the situation.
-John

QUOTE (Pollux @ Aug 20 2008, 12:23 AM)
At risk of being off topic, I will answer your question:
Sequence structures (IMHO) exist to allow text based programmers who do not 'get' the idea of dataflow to explicitly control the sequence of execution. They are like the 'GOTO' statement of G programming!
I have yet to find a situation where they are required. We use the error wire to control execution where needed. We also modularize our code as much as possible (lots of sub-VIs). And we limit our diagrams to one screen (99% of the time).
The key things I do not like about sequence structures are:
1. Violates dataflow within a single diagram.
2. Stacked structures hide code and force the code to have wires that flow backwards to pass data between cases.
3. Flat structures make for large diagrams that are messy.
Don't get me started on Globals either... (that horse has been beat to death anyways on LAVA and InfoLabVIEW)
-John

QUOTE (Ton @ Aug 19 2008, 10:17 AM)
Unfortunately, this is not true. LabVIEW is wildly more inefficient when calling .NET functions than C# is. This has to do with the method NI chose to perform the calls to the .NET interfaces. I don't know all the technical details (Brian Tyler, formerly of NI, once explained it to me in detail but it was over my head) but I do know to never make a large number of calls to .NET functions in a loop in LabVIEW. Instead, build a simple C# assembly to do the loop processing and then pass the data back to LabVIEW inside an ArrayList. Check out Brian's old blog for the details. If you can't find it, or don't understand how to do this, let me know and I will post an example.
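The underlying principle is general: make one 'chunky' call across the boundary instead of thousands of 'chatty' ones. A toy Python sketch of the idea (counting simulated boundary crossings rather than doing real .NET interop; all names here are made up):

```python
# Count boundary crossings instead of timing them; each call to one of
# these functions stands in for one LabVIEW -> .NET transition, which
# in reality pays a fixed marshalling overhead.
crossings = 0
_DATA = list(range(1000))  # pretend this collection lives on the .NET side

def _cross():
    global crossings
    crossings += 1

def get_count():
    _cross()
    return len(_DATA)

def get_item(i):
    _cross()
    return _DATA[i]

def get_all():
    # The "ArrayList trick" in spirit: one crossing returns everything.
    _cross()
    return list(_DATA)

chatty = [get_item(i) for i in range(get_count())]
chatty_cost = crossings            # 1001 crossings

crossings = 0
chunky = get_all()
chunky_cost = crossings            # 1 crossing

print(chatty == chunky, chatty_cost, chunky_cost)  # True 1001 1
```

Same data either way, but the per-element version pays the fixed overhead a thousand times over, which is where the reported 10-20x slowdowns come from.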
-John

I think you have all inspired me to come up with a new interview question:
"Can you carry on an online conversation and stay on topic for more than 3 posts?"
I guess I should just give in to the thread hijackers' demands and move on...
Thanks for the on-topic posts. I actually put some of them to use today. Unfortunately, the person I interviewed ended up having only 3 weeks of LV experience and the code example they brought was a perfect example of what not to do. I politely suggested they invest in a copy of the "LabVIEW Style Guide" and left it at that.
My favorite response was to the question "what is one of your favorite features of LabVIEW?". They said the 'film strip-looking box'... My co-worker almost cracked up when he said this, since I have banned the use of sequence structures in our company's LV code.
Now back to the latest tangent...
-John

Thanks for all the good suggestions!
This is a very interesting topic. I will be using some of these questions today!
As for the tangent about asking about money, credit scores and marital status, all of that is strictly illegal to ask in the USA. We have to be very careful about not violating anti-discrimination laws. My job is to evaluate tech skills and personality relating to their ability to work within my team. For me, a motivated person with good communication skills and the ability to learn new things quickly can overcome any lack of current technical skills. You can't teach someone who does not want to learn!
-John

We are currently interviewing people for a few positions that either require LabVIEW experience or for which it would be a plus. I have been trying to think of some good questions to ask the applicants and figured that other LAVA members might have some good ones to suggest. So, what is your favorite question(s) to ask a prospective employee to see what level of G skills they have?
If you don't want to post the answers, that is fine. I would assume the questions are not so hard that the answer is obvious to anyone with decent LV experience.
I suppose some people looking to get a LV coding job may read this, so giving them the answers may not be the best idea, but I figure if they are smart enough to read LAVA, they should know a thing or two about LV already.
thanks for your help.
-John