Pages

Wednesday, November 29, 2017

A lack of posts here does not mean that nothing has been going on. For the past few months, I have been writing on Quora, with the intent of getting back here. In fact, the whole subject of truth engineering has been brought out there. The intent is to update this blog in the near future.

For one thing, I ran into an interest in Common Lisp on the part of the younger set. There are several manifestations in use. One, called Clojure, has been used for both front-end and back-end work. What this suggests to me is that we can start to talk about a workbench approach.

If one listens to some AI/DS/ML/DL (that is, artificial intelligence, data science, machine learning, deep learning) practitioners, one can sense an out-of-control situation. Gobs of time, money, and space get gobbled up running against suspect data, one might say. As is, this information is wide but not very deep. So, who is deep? In other words, where is the science?

Too, there is a general malaise with regard to not understanding. That is, the computer comes up with something, and the humans cannot provide anything that allows them to comprehend it and make a judgment. Instead, the reaction is to think that the computer is out of our league.

That might be true in several senses, but it is not in terms of truth (hence, we need to engineer this).

Wednesday, July 26, 2017

On Quora, there is a lot of discussion of AI. This whole framework is warped, as it does not consider several important issues related to humans and computing. It is as if the guys of SV (I say silly) can run amok (boys being boys; see USA Today, 7/25/17), and we are just to marvel at their talent.

Think again. A long tradition is being ignored. Why? Engineers do not care for the humanities, nor do they give a mote for philosophy. Psychology? Nada.

Well, how can we address that? Truth engineering was proposed as a way to discuss the matters that pertain to these issues. Last post, I mentioned hermeneutics (see starting list). Since then, I ran into Crapularity Hermeneutics (think of the huge interest now in the Singularity). I agree with Florian but see a whole lot of other ways that his concept can be used. All in good time, goes the adage.

Later, I ran into sociology of knowledge (paper by Inanna Hamati-Ataya). So, I am collecting an initial viewpoint with which to launch discussions and work.

Recently, someone asked this: What is the difference between Lacan's and Jung's literary criticism? Well, I had already considered this topic from the Jungian sense. Mostly, it would have been done by some later follower of Jung, and archetypes would play a huge role.

But, Lacan? Well, on looking him up, I found that he liked Freud's approach. Too, he gave Chomsky grief. There is something to like about that. Hence, there is a thread from Freud to Jung to Lacan that needs some attention. Chomsky is in there to pull things into modern times.

I pulled this out of a book on Lacan.

Note that Lacan had what I might call peripatetic leanings, and he does raise a question. There is a bias for the brain. Look around; it's everywhere. AI (and SV) have taken that to an extreme. Oh, they say, replicate the brain and you end up with intelligence that is artificial (and supposedly better than human).

How did we get that bias? When were those other types of intelligences (say, Gardner's; he of Harvard) thrown out?

Lots to discuss. The benefit? A fuller view so that we can dampen both the mania and the hysteria. Perhaps, then, Hawkins could sleep peacefully.

I am proposing that, like Gardner of Harvard suggests, intelligence is multifaceted. The IQ test looks at a small portion. There may be 'smart ways' (perhaps, modes) that are not seen by the high-IQ. Or, rather, the higher IQ would require additional training.

And, without being controversial, a broader scope for 'best and brightest' might be a wise choice on the part of society. What would that scope entail?

Thursday, May 11, 2017

Ten years ago, I started this blog. The first post was "Truth, can it be engineered?" Overall, there have been 284 posts. The topics have followed the times.

Early on, there were posts about engineering. The key topics were 'earned value' and 'middle out', which are constant issues. Then, the topics changed to finance due to the downturn, its consequences, and the long road back. We are still in jeopardy.

Prior to starting the blog, I was doing Wikipedia edits. I pulled this little graph that shows edits by year. It can be broken down further, but the timeline shows interrelationships.

Truth engineering started under the auspices of working in a knowledge-based engineering (KBE) environment. With all of the emphasis lately on AI (due to machine learning and deep diving into data), one wonders what the basis for all of this is. Well, KB, and its offshoot KBE, were there from the beginning. And, they will continue. We will address this more here as we go along.

In the meantime, on a related page in Wikipedia, there was a request for real examples. I briefly sketched two recently. See Talk:ICAD (software) for the examples.

Also, in terms of getting a plane to fly, it is more arduous than making a little smart phone. And, it demonstrates going up against nature big time. Nature is the chief guide. We have to conform and do so smartly. Nothing new there, as engineering has been around from the beginning.

What is new? The computer. And, social media and fake news. Don't blame the media; rather, we need to look at this stuff from a new angle. Hence, truth engineering, for one thing.

---

Post note: There was no reference in this blog to KBE over the years. Why? There were two in the related blog: Out on a limb and Here we go again, III (only a cursory mention). I suppose that I was looking at truth beyond the computer. Guess what? Jobs, with his mobile gift, has changed the landscape (and his cohorts with their various clouds, too). And so, can we go forward without knowing that 'truth' is computational, albeit with natural or artificial resources?

Wednesday, April 26, 2017

Code has been the topic of several posts. Usually, it pertained to issues such as content vs. configuration management (from the 2014 time frame). The point? If one is doing work and the computer is the tool and assistant, then one needs to control reconfigurations. That is, the producer of the system ought not come in and monkey with your process.

Yes, it's a control issue (see Stallman on this). Too, if the computer is driving things, then you have to follow. But, software people do not do this. However, they are not known for a good process, either.

Three years ago, I redid a website using only HTML/CSS. Why? It was familiar; too, I could control everything. And, it was minimal. The idea was to build upon this. I started with Microsoft's OfficeLive, but it was cut. I looked at a bunch of alternatives. None stood out.

Besides, in just a few days, I saw lots of hacking. What's with that? So, I went with not-so-simple HTML/CSS. Mind you, these things continually progress.

Want to know what CSS can do? A lot. Basically, it is parameterized code. This page shows three examples. #1 is a cut from MS OfficeLive. #2 deals with CSS (banner.js). That is, the objects are drawn and clipped (then I cut to an image file for ease). Now, #3 uses JavaScript to do the same image.

I need to reconfigure to allow more interaction and so need to bite the bullet. That example got me back into things. It basically writes on a 2D canvas.
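To make the "parameterized code" point concrete, here is a minimal sketch in Python (the language that comes up later in these notes) that generates a CSS rule for a banner from a few inputs. The selector, properties, and values are my own invention for illustration, not taken from the actual site.

```python
def banner_css(width=600, height=120, fill="#336699", radius=8):
    """Generate a CSS rule for a banner from parameters.

    CSS is treated here as parameterized code: change the inputs,
    regenerate the style. Selector and properties are illustrative.
    """
    return (
        f".banner {{\n"
        f"  width: {width}px;\n"
        f"  height: {height}px;\n"
        f"  background: {fill};\n"
        f"  border-radius: {radius}px;\n"
        f"}}"
    )

print(banner_css(width=800, fill="#884422"))
```

Change one argument and the whole style regenerates, which is the essence of the parameterized view.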

But, I just went and looked at 3D and found this by Jeremy Heleine. Nice.

It is a trip down memory lane. I was doing this 30+ years ago with a Lisp machine (What is a Lisp machine and what is so great about them? Actually, I wrote my own perspective handler). Ten years later, I was using a top-notch CAD/CAE system, around which we wrapped Lisp for knowledge-based engineering. Ten years after that, I was doing this with Python (as it was interpreted and mimicked Lisp, somewhat).

Essentially, I dealt with geometric modeling with regard to massively detailed products that supported decisions related to design, analysis, and the whole gamut of build and maintain.
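To give a flavor of that kind of rule-driven parametric modeling, here is a minimal Python sketch. The part, its sizing rules, and the default density are invented for illustration; real KBE models carry vastly more detail.

```python
from dataclasses import dataclass

@dataclass
class Spar:
    """A hypothetical wing-spar model: a few driving parameters,
    plus rules that derive the rest, KBE-style."""
    length_m: float
    web_height_m: float
    thickness_m: float

    @property
    def cross_section_area_m2(self) -> float:
        # Simplified rule: web area only, for illustration.
        return self.web_height_m * self.thickness_m

    @property
    def volume_m3(self) -> float:
        return self.cross_section_area_m2 * self.length_m

    def mass_kg(self, density_kg_m3: float = 2700.0) -> float:
        # Default density is roughly aluminum; an assumption, not a spec.
        return self.volume_m3 * density_kg_m3

spar = Spar(length_m=10.0, web_height_m=0.3, thickness_m=0.02)
print(round(spar.mass_kg(), 1))  # 162.0
```

The point is that a handful of driving parameters, plus rules, yields the derived quantities that design and analysis decisions feed on.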

Friday, March 31, 2017

Today, we hear rumblings that Russia paid trolls to put out fake news. Heck, Silly Valley has allowed us to get to a situation where we do not know what is what.

And, we just followed them down the path of perdition (I have used perdition a lot in these blogs over the years). How can anyone be surprised? We ought not have let loose the Internet as we did. So, now, it's a mess. It was a mess the whole time after the commercial thrust got going.

I have harped on this a long time. In fact, from a mobile sense, I am 1G/2G. From a computational sense, I do a lot of local processing. I rent a VM for my website. Cloud? It's a muddy mess. I do use it for storage. Carefully.

When I see Silly Valley's results over the past couple of decades, I grieve. I'm old. But, the youngsters are going to have to clean up the messes. Oh sure, jobs.

Thursday, March 23, 2017

KBE, as a particular example of KBS, is a multi-disciplinary framework that has more than practical considerations. Not only will KBE require successful handling of issues of the computational (Ontology, Artificial Intelligence, Entscheidungsproblem, Interactive computation, Category Theory, ...) and logic (non-monotonic issues related to the qualification, frame, and ramification problems), it will touch upon all sciences that deal with matter, its manipulations, and the related decisions. In a sense, Product Lifecycle Management allows us to have the world as a large laboratory for experimental co-evolution of our knowledge and artificial cohorts. As noted in ACM Communications, "Computers will grow to become scientists in their own right, with intuitions and computational variants of fascination and curiosity." [19] What better framework is there to explore the "increasingly complicated mappings between the human world and the computational"?

A continuing theme will be resolving the contextual definitions for KBE into a coherent discipline and keeping a handle on managing the necessary quantitative comparisons. One issue considers what limits there may be to the computational; this study requires a multi-disciplinary focus and an understanding of the quasi-empirical. Given the knowledge focus of KBE, another issue involves what limits there might be to a computational basis for knowledge and whether these are overcome with the more advanced types of human-machine interface.

It is important not to treat KBE technology in isolation, but to focus on its role in the overall Product Development Process (PDP). During development, it is important to streamline the process from knowledge capture to software implementation. To this end, close coupling between Knowledge Management and KBE is desired. The transition from data and information inside a Knowledge Base to software code is of particular relevance. The best results can be achieved by using model-driven software development principles, which include automatic code generation and round-tripping. The use of KBE during a PDP requires not only the ability to easily set up (existing) KBE applications from a knowledge level, but also the ability to store back the results after execution of the tool. In order to use KBE at a strategic level as a decision-making and planning support mechanism, it is important to relate results back to the systems engineering domain (requirements, functions, options, embodiment). From a deployment perspective, better integration with other IT tools should be realized. Couplings between KBE applications, Knowledge Bases, and Simulation Workflow Management software are of particular importance. The iProd project tries to take KBE to the next level by addressing these aspects.[20] The iProd framework uses KBE technology as a reasoning mechanism to infer new product knowledge, as a means to automate virtual execution (CAE simulation), and as an MDO enabler. On an IT level, it prototypes KB-KBE couplings (code generation, round-tripping, results storage, and automatic workflow generation) and SWFM-KBE integration (on the basis of the software-as-a-service paradigm).
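As a toy illustration of the knowledge-to-code step (not the iProd framework itself), the sketch below stores design rules as data and generates executable Python from them. The rule names and formulas are invented for illustration.

```python
# Toy "knowledge base -> generated code" round trip: design rules
# live as data; a tiny generator emits Python functions from them.
RULES = {
    "clearance": "gap - bolt_diameter",
    "margin": "allowable / applied - 1.0",
}

def generate_module(rules: dict) -> str:
    """Emit Python source: one function per rule, with parameters
    inferred from the identifiers appearing in each formula."""
    lines = []
    for name, formula in rules.items():
        params = ", ".join(sorted(set(
            tok for tok in formula.replace("(", " ").replace(")", " ").split()
            if tok.isidentifier()
        )))
        lines.append(f"def {name}({params}):\n    return {formula}\n")
    return "\n".join(lines)

source = generate_module(RULES)
namespace = {}
exec(source, namespace)  # knowledge -> code -> executable results
print(namespace["margin"](allowable=300.0, applied=200.0))  # 0.5
```

Storing the results back into the knowledge base after execution would close the round trip that the paragraph above describes.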

Wednesday, March 22, 2017

My purpose was to review progress by looking at the tools that were available. For a month, I went through the lessons and worked the exercises. It was interesting to see the on-line instruction. Naturally, I found a few bugs but overlooked them. At the time, it was free; now, there is a Pro option with more of a focus on results.

Also, at the time, I saw testimonials about people starting careers with this approach. That was interesting. Today, I ran into another (different method) that represents the time. The guy has blogged about his introduction into coding and the success that he has found.

Then, I saw that one of the morning show hosts talked to someone about an effort to teach coding to women. There are thousands of jobs open, I understand.

Hence, this little reminder. One problem with so many people just coding is that we're losing sight of issues related to quality (many issues). And, one sees this lack everywhere. Quality would include the user perspective, especially concerns related to safety, stability, and such.

I have already written about little businesses being bitten (Content and more). I just ran across an issue myself this past week: Technology's impact. So, we have issues related to content management, the efficacy of technology's use, and a whole lot more.

But, a very important one relates to this theme: Does code matter? Based upon my experience, this glut of jobs is short-sighted. Code does not provide the business intelligence that everyone is after. Now, if code is mainly small perturbations (such as was brought by the macro ability in spreadsheets), then that could very well be content related.

However, how many times do we need to re-invent the wheel?

Remarks: Modified: 03/23/2017

03/23/2017 -- Two recent videos (stumbled upon them). 1) Peter Metzger talks about his 31 years with Emacs. He shows a little of the history of computing devices. A run down memory lane. But, nice to see the old thing, Emacs, still having traction. 2) Which then brings up Richard Stallman. Richard started the Emacs ball rolling (some have followed his sainthood into the editor wars - Emacs vs. VI/VIM).

Also, he was first in looking at truth maintenance. But, in this video, Richard talks about his free software initiative and emphasizes, time and again, that outfits like Apple have created a jail, made it purty, and then convinced people to jail themselves. He does not like what I call the dumbing device. He stresses surveillance. I see it as a tether to nonsense. Hence, I am 1G/2G, even in this day of talk about 5G. In terms of truth maintenance, Richard and his advisor (Sussman) worked on constraint satisfaction. Another aspect deals with defeasible reasoning.
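For readers who have not run into truth maintenance, here is a minimal justification-based sketch in Python. The class, the method names, and the wet-ground example are my own; real truth maintenance systems (and the Stallman/Sussman work) do far more, including dependency-directed backtracking when beliefs are retracted.

```python
class TMS:
    """A minimal justification-based truth maintenance sketch:
    a belief holds if it is a given fact, or if all premises of
    some recorded justification for it hold."""

    def __init__(self):
        self.justifications = {}  # belief -> list of premise sets

    def justify(self, belief, premises):
        self.justifications.setdefault(belief, []).append(set(premises))

    def believed(self, belief, facts):
        if belief in facts:
            return True
        return any(
            all(self.believed(p, facts) for p in premise_set)
            for premise_set in self.justifications.get(belief, [])
        )

tms = TMS()
tms.justify("wet_ground", ["rain"])
tms.justify("wet_ground", ["sprinkler"])
print(tms.believed("wet_ground", {"sprinkler"}))  # True
print(tms.believed("wet_ground", set()))          # False
```

Dropping a fact and re-asking is the simplest way to see the non-monotonic flavor: conclusions come and go as their support does.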

Saturday, February 25, 2017

Of late, I have started to address these. That is, way before the events of November. Then, fake news came about, mostly due to social media. Gosh, were some of these things foreseen?

Who do you believe? There are lots of things to discuss about how truth engineering fits in the picture. And, time will tell how this will go.

I have been using a blog on Quora (see Psychether) to lay out some thoughts. Today, I picked up a nice graphic from Craig Weinberg. Again, lots to discuss; however, truth (and its assessment) is very much related to consciousness. I have been sampling work that is cosmological or physics-based.

Now, the subject used big-T Truth. Consider that we do not have to extend the view that far in order to be effective.