
Massive help, either C/C++ or VB



Ok, I am in a crunch...

I need help in either C/C++ or VB

Things I need to be able to do:

Open a file as Binary

Find and identify specific strings in the binary according to length.

Find and identify specific strings in the binary according to pattern.

Modify strings in binary.

Create new temp files to hold binary.

Place information in reverse order.

Find the first occurrence of something in a string, then 'stop'.

You should get what I'm after now, lol; the rough sketch below shows the kind of thing I'm picturing.
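Just a minimal C++ sketch to show the shape of it; the file names and the "ABC" pattern are made up, not from my actual project:

```cpp
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>

int main() {
    // Open the whole input file as raw binary and pull it into memory.
    std::ifstream in("input.bin", std::ios::binary);
    std::string data((std::istreambuf_iterator<char>(in)),
                      std::istreambuf_iterator<char>());

    // Find the first occurrence of a specific byte pattern, then 'stop'.
    const std::string pattern = "ABC";           // made-up pattern
    std::size_t pos = data.find(pattern);
    if (pos != std::string::npos) {
        std::printf("pattern found at offset %zu\n", pos);
        // Modify those bytes in place.
        for (std::size_t i = 0; i < pattern.size(); ++i)
            data[pos + i] = '\0';
    }

    // Write the modified data out to a temp file, in reverse byte order.
    std::ofstream tmp("temp.bin", std::ios::binary);
    for (std::size_t i = data.size(); i > 0; --i)
        tmp.put(data[i - 1]);
    return 0;
}
```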

This is so I can try to write the software for the compression utility I invented (it works on random binary data, lossless and repeatable; the details are posted here for those who wish to see some of the exact steps I am taking, to help me better).

TY ALL FOR HELP!

In fact, hugs, kisses, etc for all help! :lol::w00t:



Do not delude yourself... it has been proven that random data is not compressible no matter what you do to it, unless you have removed information.
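If you want to convince yourself, the counting argument fits in a few lines of C++ (the loop bound here is arbitrary): for every length n there are 2^n possible inputs but only 2^n - 1 strings that are strictly shorter, so no lossless scheme can shrink them all.

```cpp
#include <cstdio>

int main() {
    // For every length n there are 2^n possible n-bit strings, but only
    // 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 strings that are strictly shorter.
    for (int n = 1; n <= 16; ++n) {
        unsigned long long inputs  = 1ULL << n;        // all n-bit inputs
        unsigned long long shorter = (1ULL << n) - 1;  // all possible shorter outputs
        std::printf("n=%2d  inputs=%6llu  shorter outputs=%6llu\n", n, inputs, shorter);
    }
    // inputs > shorter for every n, so by pigeonhole some n-bit input has no
    // unique shorter encoding; a lossless "compressor" must expand it instead.
    return 0;
}
```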

On examination of your descriptions, it seems you're just explaining the VLE/Huffman process in detail.

I see no fruits in your efforts :lol:


I agree with LLXX.

I am an electronics technician and have studied encryption and compression for a long time. Truly random data is not compressible, and before you tell me it is, I am going by many years of research by numerous "gods" of the compression sector.

Firstly, before you say engineers don't know anything about compression, just to let you know it's the engineering sector that is responsible for compression in the first place. Claude Shannon was the "father" of modern compression; he set down the rules we all must obey. By the way, he was an engineer :)

Secondly, for compression to happen there must be some form of predictability, or pattern, in the data. All encoding methods require you to use the statistics of the data to encode it, replacing the most commonly occurring pieces of data with the smallest codes.
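To make that concrete, here is a bare-bones Huffman build in C++ with made-up byte counts (not anything from your file). The frequent symbol ends up with a 1-bit code and the rare ones with 3-bit codes; on truly random data every symbol is about equally frequent, so all the codes come out the same length and you gain nothing:

```cpp
#include <cstdio>
#include <map>
#include <queue>
#include <string>
#include <vector>

struct Node {
    long freq;
    int  symbol;              // -1 for internal nodes
    Node *left, *right;
    Node(long f, int s, Node* l = nullptr, Node* r = nullptr)
        : freq(f), symbol(s), left(l), right(r) {}
};

struct ByFreq {
    bool operator()(const Node* a, const Node* b) const { return a->freq > b->freq; }
};

// Walk the tree and record the bit string assigned to each leaf symbol.
static void assign(const Node* n, const std::string& code,
                   std::map<int, std::string>& out) {
    if (!n->left && !n->right) { out[n->symbol] = code.empty() ? "0" : code; return; }
    assign(n->left,  code + "0", out);
    assign(n->right, code + "1", out);
}

int main() {
    // Made-up frequencies: 'a' occurs often, 'c' and 'd' rarely.
    std::map<int, long> freq = {{'a', 45}, {'b', 13}, {'c', 12}, {'d', 5}};

    std::priority_queue<Node*, std::vector<Node*>, ByFreq> pq;
    for (auto& kv : freq)
        pq.push(new Node(kv.second, kv.first));

    // Repeatedly merge the two least frequent nodes into one parent.
    while (pq.size() > 1) {
        Node* a = pq.top(); pq.pop();
        Node* b = pq.top(); pq.pop();
        pq.push(new Node(a->freq + b->freq, -1, a, b));
    }

    // Common symbols end up near the root (short codes), rare ones deeper.
    std::map<int, std::string> codes;
    assign(pq.top(), "", codes);
    for (auto& kv : codes)
        std::printf("'%c' (freq %ld) -> %s\n", kv.first, freq[kv.first], kv.second.c_str());
    return 0;   // nodes are deliberately leaked; fine for a throwaway demo
}
```

Run it and you will see 'a' get a single bit while 'c' and 'd' get three bits each; that imbalance in the statistics is the only thing the code has to work with.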

Thirdly, up until today there have been many claims that people have found a way to compress random data, just to fade into the background when asked to prove it. Part of the competition you are trying to win is to explain how you did it.

And finally, just to throw a spanner in the works, as far as I am aware no one has found a more efficient way of encoding the final stage of compression than Huffman (or, if you wanted, Shannon-Fano, as it gives roughly the same result; it's about 2% less efficient, creating codes that are larger by 1 bit in some cases). So if you can't do it with Huffman, how are you ever going to do it? (All compression schemes use different methods to reduce the data before Huffman coding or run-length encoding [but Huffman is usually the method of choice], reducing the required dictionary for Huffman coding.)

May I also point out, from looking at your PowerPoint presentation, that even if you are successful in getting your program written, the statistical analysis is flawed. From the data given in the presentation, all you will do is "compress" the data into a LARGER file. Your coding scheme expands codes instead of compressing them, and then you add a bit to flag whether compression has happened or not. From what I can gather, you are trying to balance the number of 0's and 1's so that statistically there are the same number of 0's as 1's. Even if you run it through a converter to make the codes comply with Manchester coding, or a similar coding method that only allows a certain number of consecutive 1's or 0's before forcing a change, you will still end up expanding the data.

Maybe I'm wrong, but I have faith in over 40 years of analysis. (And as a technician I am told to always question everything I'm told.)


Actually, arithmetic coding can be much more efficient than Huffman - since it can theoretically use 1.7 bits for a certain symbol whereas Huffman will have to use 2. On large files the difference adds up.
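Roughly speaking, the ideal cost of a symbol with probability p is -log2(p) bits. Huffman has to round that up to a whole number of bits per symbol, while arithmetic coding can effectively spend the fractional amount across the message. A quick C++ check with a made-up skewed distribution (the listed Huffman lengths are what the algorithm would assign for it):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Made-up, heavily skewed symbol probabilities.  For this distribution a
    // Huffman code would assign lengths of 1, 2, 3 and 3 bits respectively.
    const double p[]    = {0.90, 0.05, 0.03, 0.02};
    const int    huff[] = {1, 2, 3, 3};

    double ideal = 0.0, actual = 0.0;
    for (int i = 0; i < 4; ++i) {
        double bits = -std::log2(p[i]);      // ideal cost: fractional bits
        ideal  += p[i] * bits;
        actual += p[i] * huff[i];
        std::printf("p=%.2f  ideal %.2f bits  Huffman %d bits\n", p[i], bits, huff[i]);
    }
    // Arithmetic coding approaches the ideal average; Huffman is stuck with
    // whole-bit code lengths per symbol.
    std::printf("average: %.3f bits/symbol ideal vs %.3f bits/symbol Huffman\n",
                ideal, actual);
    return 0;
}
```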

I apologise, I forgot about arithmetic coding.

You are right, arithmetic coding is theoretically more efficient and in practice usually ends up more efficient.

I was half asleep when writing the reply, but I was also just trying to give a valid reason why it is fruitless to try to compress random data, especially in the manner he has explained in the presentation.

