Implode (coding) turns a long code (input data) into a short key code (output data).
Explode (decoding) turns the short key code back into the original long code.
Two different long codes may never generate the same short key code.

How do you implode and make the key code?

Use the unique original data as a reference and replace duplicate data with coordinates that refer to the reference data.
The unique data(*) + coordinates + separator codes must be shorter than the original data.
This process can be repeated until no further implosion is possible; either some extra data (the pass count) must be added to the key code, or a fixed count must be used. What is left is the key code.

* The unique data can also be kept separate from the coordinates, as a reference memory.

How do you explode and get the original data back?

Split the key code into unique data(*) and coordinates according to the separator codes, and use the coordinates to find the original data back in the reference data.
If a repeating algorithm was used, this process is repeated according to the extra data in the key code (or the fixed count, if one is used) until the original data is back.

* If a reference memory is used, the unique data is not in the key code, so the reference memory is also needed to decode.
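
To make the description above concrete, here is a minimal toy sketch in Python (my own illustration, not any inventor's actual algorithm). Note that the total length has to travel along with the key, which matches the remark above that some extra data must be added, and that nothing guarantees the reference plus the coordinates end up shorter than the input, which is exactly the requirement stated above.

def implode(data):
    """Keep the first occurrence of every byte value as reference data;
    replace each duplicate by a (source_position, removed_position) pair."""
    reference, coords, first_seen = bytearray(), [], {}
    for i, b in enumerate(data):
        if b in first_seen:
            coords.append((first_seen[b], i))
        else:
            first_seen[b] = i
            reference.append(b)
    return bytes(reference), coords

def explode(reference, coords, total_len):
    """Rebuild the original: place the reference bytes at the positions that
    were not removed, then copy the duplicates back via their coordinates."""
    out = [None] * total_len
    removed = {r for _, r in coords}
    ref_iter = iter(reference)
    for i in range(total_len):
        if i not in removed:
            out[i] = next(ref_iter)
    for src, rem in coords:
        out[rem] = out[src]
    return bytes(out)

original = b"ABCABD"
reference, coords = implode(original)
print(reference, coords)  # b'ABCD' [(0, 3), (1, 4)]
assert explode(reference, coords, len(original)) == original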

Is it possible to implode data to a shorter key code and back into the original data in all cases?

Yes and no: if you have 100,000,000,000 different DVD titles, the key code also needs at least 100,000,000,000 different combinations.
So if there is only implosion in the cases where it's needed (those 100,000,000,000 existing DVD titles) and expansion in the other cases (the theoretically almost endless but non-existing DVD titles), you have a practical model.
When the original data is a repetition of something similar, for example a video signal where every frame (screen) has a fixed number of possibilities, then the number of possibilities in one frame multiplied by the total number of frames in the longest video that must be storable can be used as the minimum key code length. I think this can work if you generate a key code for every frame and append them all one after another; from the result a shorter key code can be generated again. In that case it must be possible to view one frame, or several frames in a row, by decoding only the frame key codes that are needed, so the decoder and player don't need a lot of storage to decode the whole movie at once, and after the total key code has been split into frame key codes you can switch to any frame you want in a very short time.
This same model can also be used for other data, for example books: generate a key code for every page, then from all the page key codes generate the book key code; you can even generate a key code for a whole library from all the book key codes. You can then decode book by book or page by page, without the need to decode all the books or pages at once.
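
A quick sanity check of the counting above, in Python (my own back-of-the-envelope figures, not from any of the inventors):

import math

titles = 100_000_000_000
# Any key that distinguishes this many titles needs at least this many bits:
print(math.ceil(math.log2(titles)))   # 37 bits, about 5 bytes

# A 1Kb key as in the Sloot story offers 8192 bits, so 2**8192 possible
# values -- far more than the 10**11 existing titles. Counting alone does
# not forbid such a key; the hard part is covering every *possible* movie,
# not only every *existing* one.
print(8 * 1024)                       # 8192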

* VB6=Microsoft Visual Basic version 6.0
** Compression Methods V1.04 does not run under Visual Basic 2003 .NET without adjustments
*** Old school can be a new algorithm, but not the different-approach endless compression way. As long as there are no public endless compression demos, old school is still the best available.
Microsoft Visual Basic Home page

Used methods:

Bit - Robert Langley's method converts every 32 bits into 31 bits, which can be repeated(*).

Byte (8 bits) - Sandy Shaw's method replaces a duplicate byte (8 bits) with coordinates shorter than 8 bits, which can be repeated(*).

* Repeated means that the output can be used as input again.
** Repeatable means in this case that the Sloot method appears to make a shorter key out of a larger set of reference memory coordinates by using those coordinates as input data again (reusing the reference memory), or by using another method for this part that is unknown to me.

Robert Langley, Sandy Shaw, Peter St. George, Jules Gilbert and David C. James all claim that their method is not affected by random data, for example data generated by a random number generator (you can download my favorite random file, 1,000Kb, which I use to test my own software builds).

According to both Claude Shannon's information theory and Klaus Holtz's "Autosophy" information theory, random bit sequences cannot be compressed without data expansion (negative compression ratios). Charles Bloom also says "There are no magic compressors that can pack any data"; see his proof, and see also David Salomon's The Counting Argument. So almost endless lossless data compression looks impossible, or is random data not as random as we think? Story about randomness.
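
The counting argument itself fits in a few lines of Python (a generic demonstration, not Bloom's or Salomon's own formulation):

from itertools import product

n = 4
inputs  = [''.join(p) for p in product('01', repeat=n)]                    # all n-bit strings
shorter = [''.join(p) for k in range(n) for p in product('01', repeat=k)]  # all strictly shorter ones
print(len(inputs), len(shorter))  # 16 vs 15

# There are 2**n inputs but only 2**n - 1 shorter outputs, so any coder that
# shrinks every input must map two different inputs to the same output and
# can no longer decode losslessly.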

Adam Clark said in this interview: "They were thinking the wrong way. I think it's my different approach. They think there are certain rules that can't be broken, but I didn't know the rules. I didn't know my limitations."
In the Dutch book "De broncode"(*) Jan Sloot talks about another way of thinking, something that works at hardware level, another way of coding; he also named it "seven" (Dutch "zeven", which can also mean "to sieve" or "to filter"). He didn't use zeros and ones any more, because that was two-dimensional; he explains that there are three dimensions. On May 10, 1999 Jan Sloot also wrote something like this: "Since I also don't believe compression methods are possible which can, for example, store a video film in less than 100Kb, I have searched for another method. After many years of experimenting I have succeeded with a very new technique: without using compression methods, all types of data can be stored on a medium of maximum 128Kb and played back without loss of quality and speed."
Peter St. George said that what his company has done is possible "because we took an approach that took us out of the box that everyone was living in." He also said "People want to define themselves by limitations," "[People] want to put [themselves] in a small tank because then we're a big fish.", "We can compress every single permutation of an N-member set.", "With our technology, you have to have a minimum set of base two binary carriers to create a multidimensional construct. Once that construct has been created, we can create a random sequence, a pattern sequence. It doesn't matter." and "Yes, base two is synonymous with binary. You need at least 100 bits, let's say, to create a multidimensional construct. Everything in an N-member set can be expressed in an N-1 set. You can reconstruct the H set without losing a bit in the process."
I remember that as a teenager I saw binary code for the first time, in a table with all the possibilities, and I thought what a waste it was to store all those bits just to make smaller numbers fit a fixed length. I understood that the fixed length was there to make the decoding process work, but I always wondered if there wasn't a smarter way. Jean Simmons from Premier America Corporation says in a white paper that their MINC (Minimizing INformation Coder) solved the length problem.

It looks like most inventors didn't use data compression but found a smarter way to describe data, which made it more efficient than the Shannon way. In that case it's understandable that with some systems it was possible to present the output data as Shannon input data again and repeat the process.

Klaus Holtz is the only one who explains very clearly how his Autosophy information theory works, and they have real-time video compression demos.

Jan Sloot's principle looks like that of Klaus Holtz, with the difference that Sloot made a fixed, static reference memory with all the unique data already in it, while Holtz made it dynamic, as a self-learning system; also, Sloot's final output key was only 1Kb in size.
As written in the book "De broncode", Sloot used 5 algorithms and needed 12Mb per algorithm, which included storage for temporary calculations. He was working on a new application that needed 74Mb per algorithm for the temporary calculations of longer movies/TV programs, probably to store the larger number of frame keys after the 1Kb input key was decoded. The advantage of Sloot's system was that the processors with the algorithms, including the reference memory and the memory for temporary calculation storage, could be built into every electronic device. After that, only one 1Kb key code per movie or TV program was needed to generate the frames for display on a display device.

Let's say one movie/program frame is 1024x640=655,360 pixels.
According to Jan Sloot's second patent:
One block is 16x16=256 pixels
And 64 blocks are one row
Then there are 655,360/256=2,560 blocks in a frame
And 655,360/(256x64)=40 rows in a frame

If there are 25 frames a second and a movie is 90 minutes then:
There are 655,360x25x60x90=88,473,600,000 pixels in a movie/program
88,473,600,000/256=345,600,000 blocks in a movie/program
88,473,600,000/(256x64)=5,400,000 rows in a movie/program
88,473,600,000/655,360=135,000 frames in a movie/program
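
The arithmetic above can be checked with a few lines of Python:

pixels_per_frame = 1024 * 640          # 655,360 pixels
pixels_per_block = 16 * 16             # 256 pixels
blocks_per_row   = 64
frames = 25 * 60 * 90                  # 25 frames/s for 90 minutes
pixels = pixels_per_frame * frames

print(pixels)                                         # 88,473,600,000 pixels
print(pixels // pixels_per_block)                     # 345,600,000 blocks
print(pixels // (pixels_per_block * blocks_per_row))  # 5,400,000 rows
print(pixels // pixels_per_frame)                     # 135,000 frames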

Check that all blocks, rows and frames are stored only once, and that in case of duplicates only coordinates are stored.

42 storage

(chip card) keeps a copy of the movie/program memory (40) and calculations from the key processor (41)

43 input-output equipment

(chip card reader)

44 key processor coding part(*)

stores the movie/program code in the movie/program memory (40)

* Also digital video signal output.

In the above example pixels are used, but the same is possible with audio or text.
Details about the reference memory storage and the key code algorithms are not explained in this patent description.
If, for example, a video input pixel is 1 byte, then every coding part (5 in total) must generate an output key 40 times smaller than its input data to end up with a 1Kb key:
88,473,600,000 bytes/(40x40x40x40x40)=864 bytes (without audio).
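
Or, assuming five equal passes as suggested above:

raw_bytes = 88_473_600_000       # 1 byte per pixel, video only
print(raw_bytes / 40**5)         # 864.0 bytes, which fits inside a 1Kb key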

Sandy Shaw also explains how his Multipass HoloDynamic Compression works, but I miss some details on how to make it work in all cases.
Let's say we have a string like "RESERVED"; to store it you need 8 bytes = 64 bits.
If we replace the "E" at position 4 with the coordinate of the "E" at position 2, and add the position of the removed "E" (position 4), the coordinate becomes (2,4). Because a position can never be greater than 8, a coordinate value fits within 3 bits, so one coordinate pair takes 2x3 bits = 6 bits. We do the same with the "R" at position 5, which becomes coordinate (1,5), and with the "E" at position 7, which becomes coordinate (2,7). So in total we replace 3 bytes of 8 bits each with 3 coordinate pairs of 6 bits each, saving 24 bits (3x8 bits) - 18 bits (3x6 bits) = 6 bits. Because the first byte is always a reference byte, only 7 of the 8 values reserved in the 3-bit coordinate field are actually used.
The trick that is not explained on his pages is how to store this reference data RESVD and the coordinates (2,4)(1,5)(2,7) so that you can get the original back. We can solve this, for example, by using the first 3 bits to store the number of coordinates; since there can never be more than 7 coordinates, it fits in 3 bits. So it becomes (3)(2,4)(1,5)(2,7)RESVD. The decoder recovers the original by first reading the first 3 bits, then reading the number of coordinates stored there (three times 6 bits), and then reading the reference data after calculating the number of bytes left: 8 bytes - 3 bytes = 5 bytes of reference data. Then, according to the coordinates, the missing bytes can be put back in the right places. Because the first coordinate's removed position is 4, the first three reference bytes can be placed back, giving RES; then we add the first coordinate (2,4)=E to get RESE; then the next coordinate (1,5)=R to get RESER; then, because the next coordinate's removed position is 7, the fourth reference byte V is placed back to get RESERV; then we add the last coordinate (2,7)=E to get RESERVE; and finally we add the last reference byte D to complete RESERVED.
Because we added an extra 3 bits to store the number of coordinates, the total saving is 6 bits - 3 bits = 3 bits. In this case we coded only once, but it's possible to use the reference data, coordinates and separator data as input again. In that case the number of coding passes must also be stored in the output data, so that decoding can run the same number of passes.
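
Here is a runnable Python version of the example above. The exact bit packing (the -1/-2 offsets) is my own guess, since Sandy Shaw's pages don't spell it out, but the layout (3)(2,4)(1,5)(2,7)RESVD and the 61-bit result match the walk-through:

def encode(block):
    """8-byte block -> bit string: a 3-bit coordinate count, then 6 bits per
    (source position, removed position) pair, 1-based as in the text, then
    the remaining reference bytes."""
    assert len(block) == 8
    coords, reference, first = [], [], {}
    for pos, b in enumerate(block, start=1):
        if b in first and len(coords) < 7:
            coords.append((first[b], pos))
        else:
            first.setdefault(b, pos)
            reference.append(b)
    bits = format(len(coords), '03b')
    for src, rem in coords:
        # src is 1..7 and rem is 2..8, so each field uses 7 of the 8 values
        # of a 3-bit field, as noted above.
        bits += format(src - 1, '03b') + format(rem - 2, '03b')
    return bits + ''.join(format(b, '08b') for b in reference)

def decode(bits):
    n, p, coords = int(bits[:3], 2), 3, []
    for _ in range(n):
        coords.append((int(bits[p:p+3], 2) + 1, int(bits[p+3:p+6], 2) + 2))
        p += 6
    reference = [int(bits[i:i+8], 2) for i in range(p, len(bits), 8)]
    out, removed, it = [None] * 8, {rem for _, rem in coords}, iter(reference)
    for pos in range(1, 9):
        if pos not in removed:
            out[pos - 1] = next(it)
    for src, rem in coords:
        out[rem - 1] = out[src - 1]
    return bytes(out)

bits = encode(b"RESERVED")
print(len(bits))                 # 61 bits instead of 64: a saving of 3 bits
assert decode(bits) == b"RESERVED"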
The above example only works when 25% or more of every 8 bytes is duplicated. With blocks longer than 8 bytes the chance of duplicate bytes becomes higher, but so does the coordinate length, so another solution is needed there.

Robert Langley used bit compression.
First the binary search method (see the patent description for details) is used to turn every 8-bit integer into a 7-bit output if the input value is odd, or a 6-bit (or shorter) output if the input value is even.

The removal code indicates the number of bits removed by the binary search method.

For example, the input is "110011010010", 12 bits in total.
First we split the input; the output then becomes:
1100=(10)1
110=(0)1
100=(10)
10=(0)

So (10)1(0)1(10)(0)

To decode it we need separate bits "o"; they are added according to two rules:

Type A - when hanging bits are present, the separate zero code must be inserted directly behind the hanging bit.
Type B - when no hanging bits are present and the removal code has one or more high-order one bits, the separate zero code must be inserted directly before the start of the next removal code. This type is not clearly explained in the patent description, because which removal code are we supposed to look at?

According to rule A we insert two zero bits: (10)1o(0)1o(10)(0)

There are now 10 bits left: 1010010100
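
For what it's worth, this step of the worked example can be replayed mechanically in Python (this only reproduces the example above; the binary search mapping itself is described only in the patent):

segments = [("10", "1"), ("0", "1"), ("10", ""), ("0", "")]  # (removal code, hanging bit)
out = ""
for removal_code, hanging in segments:
    out += removal_code + hanging
    if hanging:               # rule A: separate zero directly behind a hanging bit
        out += "0"
print(out, len(out))          # 1010010100 10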

According to the patent flowchart (figure 3) we can read the original input back.
But when we try all possible combinations we run into a problem:

In the patent description examples, exactly the combinations 10100 and 100100 are missing.
I tried to fix this, but then there was no gain any more; with another fix there was sometimes gain, sometimes not. The question is: can this be solved? Looking at the patent, which describes a hardware version that goes much further than this step alone, the inventor had it working and maybe didn't want to write the working algorithm into the patent, to prevent copying of his invention.

Strange things:

Jan Sloot died (July 11, 1999) just days before the date on which he was to register the source code. That can always happen, but nothing was ever found among his papers or equipment that could explain his invention, while there are people who saw him write it down and who also saw much more equipment and paperwork than what was found after his death.

The cause of death of Robert Langley (Sept 14, 2001), and what happened to his company and license, is unknown to me; if somebody has more information, let me know.

The ZeoSync Corporation website shut down (quickly after June 3, 2002), after they had announced (January 7, 2002) their invention to the world and after they had raised (around March 1, 2002) an extra 40 million dollars in private stock (the company had already started with 10 million dollars in private funding).

Media World Communications Limited acquired (October 1, 2003) inventor Adam Clark's Adams Platform Technology (APT) licence after the Tolly Group(*), CSIRO and Monash University proved(**) that the invention was working.
Just before MWC wanted to start selling stock on the Australian Stock Exchange (ASX) again, suddenly (September 3, 2004) the APT invention was not working as expected, and they were even using a copy of the On2_VP3 codec(***); this after 7 years of researching and improving the invention and many successful demonstrations(****)! It's clear that two different systems were tested here, and that the Tolly Group tested the real APT system.

* The Tolly Group tests for 3Com, Cisco Systems, Inc., Dell Computer Corp., Hewlett-Packard Co., IBM, Intel Corp., Lucent Technologies, Microsoft Corp., and Nortel Networks.
** Update: the Adams Platform proof was removed from the Tolly website on September 23, 2004!
*** The On2 codec company website www.on2.com started January 25, 1999, years after Adam Clark first showed his invention!
**** On August 17, 2001 Screendaily wrote: 'Shrouded in mystery and under tight security, some 20 specially selected guests, including cinema executives, were this week given a demonstration at the Hard Rock Cafe in Sydney, of something that could change the face of video distribution into the home. What they saw - and an "independent" executive from high-profile consultancy Deloitte Touche Tohatsu confirmed there was no smoke or mirrors - was full screen video streamed in real time from a server 1,000 kilometres away, using Internet protocols and a standard PC, analogue phone line and modem. There was no down load time, no broadband infrastructure and impressive picture quality. "We would not be in this if we were not 110% sure that the technology works," MWB chief executive John Tatoulis told Screendaily, admitting to feeling a bit like a snake oil salesman. "It is real, it was developed in my home town, and we now have the opportunity to revolutionise the distribution and entertainment industry."'

Overview of inventors:

Alive/company active Klaus Holtz (Omni Dimensional Networks www.autosophy.com), Dynamic library. A friend and I studied the many patents; the friend built it in 2001 and it compressed TIFF pictures 200 times.

Alive/company active Sandy Shaw (Fractal Genomics/Health Discovery Corporation, old homepage, www.healthdiscoverycorp.com), Endless compression, Reverse discovery(**). I built this in 2004; it was working, but not on random files.

Alive/company active Garry Barker (Blaze International Ltd mirror www.blazelimited.com, www.pixe.com.au, April 7, 2005 patent). Pixe is a pre-compression filter that allows a video file to be reduced to one-450th of its original size. Pixe reduces a file by up to nine times before it is compressed in the usual way, usually to a maximum of 50:1, by one or other of the 30-odd standard codecs; more info.

?/company offline Peter St. George (ZeoSync Corporation mirror www.zeosync.com), Endless compression, 2002 patent parts 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24. I looked at the info on the Internet in 2001 and glanced at the patents in 2003, but they were too difficult for me :-) See Susan's diary for brief technical details.

?/company disappeared Earl Bradley? (WEB Technologies), studied in 2003, info missing; brief description in Byte: compresses files greater than 64KB in size to about 1/16th of their original length. "In fact, virtually any amount of data can be compressed to under 1024 bytes by using DataFiles/16 to compress its own output multiple times." They have a chart showing that 67,108,864 bytes can be reduced to only 994 bytes in only four iterations. See also this article.

Alive/company active Daniel Kilbank (Qbit, LLC/Portavision LLC, www.qbit.com), 02/06/2003 patent, 02/06/2003 patent, 05/06/2004 patent, 10/21/2004 patent, 10/21/2004 patent, 10/21/2004 patent, 11/04/2004 patent, 2003 patent. Z-Image losslessly encodes and decodes original, raw raster data: 3 to 5x lossless encoding improvement using interframes, and 10x lossless using intraframes. Qbit's core technology is a set of algorithms that, unlike current compression methods, "account for every pixel, and for every bit that makes up the file type," Kilbank said. "You have a unitary transformation. What comes in, in the mathematical formula, is captured, catalogued, and run through a series of theoretical physics-inspired models and is reformulated into a new digital file. At the receiving end, the reverse process takes place." More about Daniel Kilbank.

?/company disappeared ? (Zyndecom Technologies, Inc, mirror www.zyndecom.com). Z Boxx or Z Box combines DryIce high compression technology, which can compress still images up to 4000:1, with Syndeos synchronization of both software and hardware systems. See also this news article; eCom eCom.com, Inc. owns 51% of Zyndecom and PremierSoft owns 49%.

?/company active Dennis Montgomery (eTreppid Technologies, LLC, www.etreppid.com), 10/10/2002 patent, 08/05/2003 patent, 28/07/2004 patent, 15/09/2004 patent. Lossless compression ratio of 10:1 on NTSC/HDTV movies and lossy compression ratios up to 200:1 on NTSC/HDTV movies; a lossy movie can be stored and played from CD and an HDTV movie from DVD. Uses patent-pending advanced multi-pass, adaptive and pattern-matching algorithms to significantly decrease the file size of still images. Compresses audio smaller than MP3 with better quality. Increases the transmission bandwidth of existing copper, fiber, cable and satellite. Increases disk storage capacity. Look at the old mirror website for more information. eTreppid won a federal military contract worth up to 30 million dollars.

Alive/company active Nils-Johan Pedersen (FileFlow, Inc www.fileflow.com, Sharpbyte Ltd. www.sharpbyte.com). The FIT format is 2-10x smaller than JPEG (update: compression of images typically yields ratios ranging from 5:1 up to 100:1 and above); details are missing. New info about FileFlow is that they use normal compression: a wavelet-based still image compression algorithm. Large image files are reduced an average of 25:1 with no visual loss of quality. Non-image content will be losslessly compressed using ZIP compression and all files will be encrypted using 2048-bit encryption during transmission.

* Online Dutch to English translation.
** With the words "Reverse discovery" by Sandy Shaw, I guess he takes sample data, for example from some stocks over a period of time, and sees that as a key; then he explodes it (with his own invented Holy Grail algorithm, HoloDynamic Compression?) into something bigger to see the connections behind the data, such as which stock influences which other stock at what moment after what movement. Maybe he uses this method to discover the working of human genes in the company he joined.

The future:

That not everybody is happy with endless compression inventions is understandable.
But the Fifth Force and MWC wanted to license the inventions so that everybody could use them. OK, some companies would need to adjust drastically, but with all the new possibilities there would finally be an overall gain.
If secret services use endless compression to encrypt their data, they may also not be happy that everybody starts to understand the principles. With some eye blinks or other body movements you could send a code with a whole story behind it. But it looks like more than one endless compression algorithm is possible, so there must be a solution here too, or they could start using the G-Com Technology from inventor Dr. rer. nat. Hartmut Müller; then nobody can tap into the signal.
Another problem could be that they can't tap others any more, but that is already possible today when others use a G-Com device, and if systems are made public you can buy a license yourself to decrypt. Read on the often very well informed James M. McCanney website what is written about the group Heaven's Gate: "The government killed all those people. And the reason they killed them is because they were in competition with a very large software outfit, and the Heaven's Gate group were all programmers. They were building encryption software and firewalls that the government couldn't break into."
It looks like Kevin Short is successful with audio compression, Arvind Thiagarajan with image compression, Kyle Kirby with hard disk compression and Amit Singh/Balraj Singh with fast network products, so maybe fast video streaming/compression is the problem. I can imagine many military products where fast video streaming/compression could be used, and they may also not be so happy when all nations walk with cams on the battlefield.
Then there is the force, unknown to me, that has kept many inventions away for more than 50 years, until today. Can an endless compression theory lead to understanding of the principles behind the other secret inventions? In that case it could be more difficult, because one of those inventions is the disintegrator(*), which can change living matter into dust. In that case it may be good to wait until humans are ready to handle the secrets of nature. If this is the case, contact me with an understandable explanation and I will shut down this website.

Why scientists find the diagnoses of the 'x-ray' girl hard to stomach; the real problem is clearly explained by Victor Zammit in 'NEW SCIENTIST' fails to pass the 'legal test'.
What has this to do with endless compression? Even if an inventor shows his/her invention working successfully 150 times in demos/tests, with scientists, technicians and businessmen watching, it's never enough. Why? Because it's not in their study books, or worse, the study books say it can't work. Introducing something new looks impossible with that attitude. Better is to research and explain all the unexplained; then fraud or new knowledge is automatically unveiled. Shouting 'it's fraud' without proof is as stupid as shouting 'it works' without proof.

* In The Secret Doctrine (October 1888) by Helena P. Blavatsky, Vol. 1, pages 554 to 566, it is explained that inventor John Worrell Keely had already demonstrated his Compound Disintegrator (Pictures 1,2,3,4,5,6,7), with the sentence "Had Keely been permitted to succeed, he might have reduced a whole army to atoms in the space of a few seconds as easily as he reduced a dead ox to the same condition". Also, inventor Nikola Tesla invented a Death Ray machine; see also an article about him in the New York Times (September 11, 1940).

Some thinking out loud to tickle the mind:

It looks like nature beats the human again, because the distribution of matter in the logarithmic space of the universe is spread like a formula, according to the Global Scaling Theory.
With a little fantasy, the mind is only an indirect real-time decoder of a key (DNA) given at birth, which generates the world your self-consciousness thinks it sees from an existing reference memory (the universe); or every cell's DNA (key) is resonating (decoding) with the rhythm (algorithm) of the universe medium to generate tissue and what we see. In 1996 I concluded that DNA must be a unique key that communicates with a timeless medium that must exist in the whole universe. The Global Scaling Theory explains that medium (a global standing gravitational wave) and the distribution of matter, and Klaus Holtz and Jan Sloot explain the key (DNA) and so, indirectly, life. It looks like the Golden Mean of the universe and compression are related, and maybe later we will say "they hacked the universe".

News/links:

Follow live how unknown powers can wipe the Adams Platform out of the public eye:
Step 1. let people think it's not working
Step 2. let people think that the inventor is a cheater
Step 3. change and erase all working materials, positive reports and papers
Step 4. cut all money flows to the inventor and company
Step 5. drive the inventor and company into bankruptcy and prison
Step 6. let the inventor lose his intellectual property
Step 7. wipe all remaining evidence, as if nothing ever existed

Arvind Thiagarajan May 28, 2005:
The original MatrixView reply from a newsgroup discussion about the new ABO compression:
"This is an official response to the recent posting that has appeared on
this website. We wish to confirm that ABO is an original technology
invented and developed by Scientists at Matrixview. We would like to
further confirm that none of our commercial versions contain any kind
of open source codes. ABO is a revolutionary algorithm that deals with
pixel data in the spatial domain. This will be clarified further when
we publish the theory of ABO on our website in the following weeks. As
much as we appreciate your interest in ABO, we request you to hold your
comments till reviewing our theory paper after which, we will be happy
to engage in further discussions. In the meantime, if you would like to
contact us on anything, please mail us at i...@matrixview.com

-SSS"

This MatrixView reply is more detailed than the one I received in November 2004.

It looks like this early EchoView test version (27 January, 2004) uses compressor.dll.pec2bac with "ABC - Advanced Blocksorting Compressor code" as suggested at Geocities, but I also see an ABOImgCmp.dll, which gives the idea that MatrixView's own patented ABO compression is used as well. The software needs a dongle to use compression, so I can't test that part.
In MatrixView's new statement they only acknowledge that their commercial software doesn't contain open source code. But it looks like MatrixView used open source code in their old EchoView version (maybe a test version) and "forgot" to mention that in their November 2004 response to me.

Adam Clark December 15, 2004:
Media World deputy chairman John Tatoulis cut Mr Clark short at one point, yelling from the crowd of 40 that Mr Clark was "a pathological liar" who had been given ample opportunity to prove the technology worked.

This is strange, because at the end of 2000 John Tatoulis had his nose almost in the very strict and well set up test environment when Terry Corall, Project Leader at the Australian Telecommunications Cooperative Research Centre (CTIE) at Monash University, successfully verified that the Adams Platform did all the video compression and streaming over a 2400 baud modem line, as Adam Clark promised. Has John Tatoulis suddenly lost his memory?

Arvind Thiagarajan November 23, 2004:
I received this link and this one; it's a website that tries to prove that MatrixView's ABO compression is a scam. A friend of mine tested the algorithm described in their patent and concluded it works. Also, Ernst & Young wrote an EchoView test report and a DataView test report which prove their image compression is better than JPG and JP2 and lossless. I contacted MatrixView and their response was "I can confirm that none of the reports on geocities are true and we do not use those standard modeling techniques as suggested by the websites. Had we utilized those modeling techniques, we would never have been able to achieve the kind of results as stated on our website."

From all the inventors listed on this website, I was only able to find some information suggesting that maybe Madison Priest and David Kim Stanley (Michael Fenne) faked their technology. Madison Priest's possible fraud is strange, because his wife helped him but noticed fraud only after a long time working together. David Kim Stanley (Michael Fenne) faked his name; I found no technology reports, only reports about a stripped Microsoft Windows Media Player, and often people want to see a working demo before investing. Only Lee Wiskowski says that he sent a friend with expertise in video compression and telecommunications to check out Pixelon's technology. The expert, whom Wiskowski declines to name, was so impressed that he invested in Pixelon prior to the private placement. Wiskowski decided Pixelon was too hot to pass up. "We were seeing them come up on MTV and VH1 [Web sites], bumping multimillion-dollar companies like RealNetworks", Wiskowski says. I only found this PDF document with Pixelon's claims; if somebody has information that a court found one of the inventors guilty of fraud, or better, if an inventor confesses and/or explains his/her fraud, contact me.

Adam Clark September 24, 2004:
On Wednesday it also emerged that technology specialist The Tolly Group had reviewed its favourable report on the technology, saying Mr Clark had deceived it.
Mr Clark yesterday told The Age that The Tolly Group's backflip was a "major contradiction".
"I am trying to get onto (Tolly Group chief executive) Kevin Tolly to talk to him about it and he hasn't been taking any of my calls," Mr Clark said.
"I will fight to make my technology see the light of day and I will fight to protect the shareholders," he said.
"A lot of people don't want to see it get the light of day and I don't know what's going on there ... you've got to ask yourself the question, what is going on?"

Yes, what is going on when only the Fairfax Digital newspapers smh.com.au and theage.com.au cover Adam's story? Are all other newspapers worldwide censored?
Where are the independent journalists who really investigate what's happening, to make sure that nobody behind the scenes is trying to change history?