Hard Light Productions Forums
Modding, Mission Design, and Coding => FS2 Open Coding - The Source Code Project (SCP) => Cross-Platform Development => Topic started by: WMCoolmon on July 26, 2005, 04:11:45 am
-
All nicely packaged and with license/gen. description added to files. Plus, it even includes a README! :p
-
Realized a small buggery with the makefile. This:
@echo Install finished; use "sudo make uninstall" if you ever wish to uninstall
should be this:
@echo Install finished\; use \"sudo make uninstall\" if you ever wish to uninstall
Not a big problem, unless you have a command named "use" that instantly reformats your hard drive or something. :)
-
You really need to make it "run `make install` as root" or something. As it is, the user in question would have to have sudo set up to allow make to run privileged, something it probably won't be by default. I haven't checked it out yet though, so you may actually have included a quick lesson on visudo usage. ;)
-
heh, I can count the number of times I've used sudo (or related tools) on the fingers of one hand, just about. I always either log on as root properly or use plain ol' su to achieve (much) the same thing. Anyone reasonably literate in Linux knows you run this sort of stuff as root and also knows the various ways to get there (and those that don't should shuffle off back to XP Home ;)), so most of the message is redundant.
-
In your next version of VPMage you should replace rar with tar.bz2 or tar.gz to add the real linux feel to it. ;)
-
Heh.
I just realized that it seems to be saving rather-corrupt files...I suspect this is due to AMD64, I'll be looking into it...
-
The time/date in the header for files has to be 32-bit. time_t is a long, so on 64-bit platforms (not including Win64) it's 64-bit. Just make sure it's written as a 32-bit value and you should be ok.
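A minimal sketch of that fix, assuming a plain fwrite-based writer (write_timestamp is a made-up name, not part of any existing lib):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

void write_timestamp(FILE *fp, time_t t)
{
    int32_t t32 = (int32_t)t;          /* truncate a 64-bit time_t to 32 bits */
    fwrite(&t32, sizeof(t32), 1, fp);  /* always 4 bytes, never sizeof(time_t) */
}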
-
Yeah, that would explain it. Actually, there are a whole host of sizeof() calls in a subfunction that have no business being there.
Edit: An updated file will be available later; there are some more tweaks/fixes I've made, and I may add some other niceties to the lib.
-
Hi there
First, good job!
Second, I had one hour of free time now, so I added autoconf/automake/libtool support to vpmagic. The library (libvpmagic) is still built with "noinst_" (so it's not installed on make install). This was done because I haven't yet had time to see whether the code is really self-contained and so on.
I have posted the tarball resulting from a "make distcheck" (one of the sexy advantages of automake) at: http://dizzy.roedu.net/dizzy.roedu.net/fs/vpmagic-0.1.tar.gz
Next time I have some free time I will look into really using the autoconf-provided #defines by conditionally including some of the headers (or maybe not; I think the code can pretty much be made to use just the ANSI C++ standard library and so have almost no unportable dependencies).
The advantages of automake are:
- it can automatically build your library (in static/shared versions) and nicely install it along with the headers
- it automatically compile-tests the code (make distcheck) and creates a release tarball
- it automatically creates and uses dependencies (so you don't need to manage include dependencies in the Makefiles the way the old ones did)
The one big disadvantage is that while automake has recently been ported to Win32, it still largely depends on a powerful command shell, so it usually works only in POSIX environments and similar (like Mac OS X, BeOS). We could use "scons" (www.scons.org) if you need very good Win32 support out of the box too (that is, without any other build project file).
Now, about the vplib API. I really don't like it. The thing is, in general, when designing an API, always try to use an existing one if there is already one that is old, used, and proven good for the same setup. In the vplib case, VP files are just virtual filesystems packed into one big file, and as such I think a better API is a filesystem API.
So I would make 2 APIs for VP file access (one in C for C or C++ users and one in C++ for C++ users); both are sketched below the list:
1. the C API I would make like the stdio API, meaning vp_open, vp_close, vp_read, vp_write, vp_seek, and everything should be done transparently in the library (the actual positioning in the VP file etc.; the write case is a little bit more complex but can be done too with a transparent interface like that); for directory structure scanning and access I would make it like POSIX or Win32, that is: vp_opendir, vp_closedir, vp_readdir (very similar to the POSIX ones and pretty close to Win32/DOS, which has the findfirst/findnext family of functions);
2. the C++ API I would make based on the existing C++ file API (ie <fstream>); so I would extend ifstream and ofstream to support exactly the same API but transparently using a VP file instead of the real filesystem; this means that all of a sudden you can reuse all your code and learned knowledge on using VP files (you could even use the <<, >> operators), no need to learn YET ANOTHER API, yet another way to use something; for directory scanning (as there are no directory scanning functions in the ANSI C++ library) I would have to make some C++ wrappers around the C directory scanning interface from above
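To make the first proposal concrete, here is a rough sketch of what the C interface could look like (vp_open and friends are just the names suggested above, not an existing library):

/* stdio-like file access inside the archive */
typedef struct vp_file vp_file;   /* opaque handle, like FILE */
typedef struct vp_dir  vp_dir;

vp_file *vp_open(const char *vpname, const char *path, const char *mode);
size_t   vp_read(void *buf, size_t size, size_t count, vp_file *f);
size_t   vp_write(const void *buf, size_t size, size_t count, vp_file *f);
int      vp_seek(vp_file *f, long offset, int whence);
int      vp_close(vp_file *f);

/* POSIX-style directory scanning inside the archive */
vp_dir     *vp_opendir(const char *vpname, const char *path);
const char *vp_readdir(vp_dir *d);   /* next entry name, NULL at end */
int         vp_closedir(vp_dir *d);

And for the C++ side, the usual route to reusing iostreams is a custom streambuf; a minimal read-only sketch, assuming the packed file's offset and size are already known from the index:

#include <algorithm>
#include <fstream>
#include <streambuf>

class vp_filebuf : public std::streambuf {
public:
    vp_filebuf(std::ifstream& vp, std::streamoff off, std::streamsize size)
        : vp_(vp), start_(off), size_(size), pos_(0) {}
protected:
    int_type underflow() {              // refill the buffer from the VP region
        if (pos_ >= size_) return traits_type::eof();
        vp_.seekg(start_ + pos_);
        std::streamsize n = std::min<std::streamsize>(sizeof(buf_), size_ - pos_);
        vp_.read(buf_, n);
        n = vp_.gcount();
        if (n <= 0) return traits_type::eof();
        pos_ += n;
        setg(buf_, buf_, buf_ + n);
        return traits_type::to_int_type(buf_[0]);
    }
private:
    std::ifstream& vp_;
    std::streamoff start_;
    std::streamsize size_, pos_;
    char buf_[4096];
};
// usage: vp_filebuf buf(vp, entry_offset, entry_size);
//        std::istream file(&buf);   // behaves like any other istream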
So what do you think? :)
-
In any design, be sure to include future support for CVP. CVP is basically a standard VP compressed with zlib. Other than some basic ideas in my head nothing exists for that yet so just keep in mind that it's coming, but I have no idea what it's going to end up looking like.
-
Oh I see, good idea. I too was thinking yesterday that having some sort of compression in VP would be cool. Well, one thing for sure is that the version will change (instead of version 2 it will probably be version 3 or whatever); the rest depends on how we decide to compress:
- we may compress per file (the easiest solution to implement, especially with a transparent filesystem API like the one proposed above, but with the drawback that you may NOT get the best compression ratio, because compressing per file might not see the best patterns across the whole archive; a sketch of this option follows)
- compress the whole "data area" (the area of the VP where the file contents sit one after another); this should provide the maximum compression ratio but is harder to implement (hmm, now that I think about it, since I wouldn't support updating just one file inside the VP anyway, but instead recreate the whole VP from scratch, it wouldn't be that complex at all)
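For the per-file option, a minimal sketch using zlib's one-shot calls; compressBound() and compress2() are real zlib functions, the rest is made up and error handling is trimmed:

#include <stdlib.h>
#include <zlib.h>

/* compress one archive entry; the caller stores both sizes in the index */
unsigned char *compress_entry(const unsigned char *src, uLong src_len,
                              uLong *dst_len)
{
    uLongf bound = compressBound(src_len);      /* worst-case output size */
    unsigned char *dst = (unsigned char *)malloc(bound);
    if (!dst) return NULL;
    if (compress2(dst, &bound, src, src_len, Z_BEST_COMPRESSION) != Z_OK) {
        free(dst);
        return NULL;
    }
    *dst_len = bound;                           /* actual compressed size */
    return dst;
}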
Another issue is that we could also compress the index; with big VPs it has some size, and it should compress a lot (this happens because the index format is not very optimal in terms of disk usage, since it stores the filenames in a fixed-size char array instead of using a variable-size format).
Any hints? :)
-
It sounds like it'd involve a total API redesign. :p
The reason I did it the way I did is that I wanted an easy-to-use API for VP management... that is, moving files about. Generally for VP management you don't need low-level functions like fopen, and implementing that really would've been more work than how I did it, because files are always stored in complete form.
And actually the way the library has grown is that I've added functions as I've needed them, and in the way that seems the most flexible while still easy to use and being fairly speedy.
For a program like POFCS, vp_write and such would make more sense because it would allow you to abstract file I/O whether a file was simply part of the native filesystem or in a VP archive.
The other thing is that I wrote it to be easily portable between virtually any compiler. Meaning no extensive make or project files...IMHO that just encourages needless complexity in a library of this size. Even if I could write a makefile with a half-dozen configurable options, I wouldn't - it's not worth it.
Compression and encryption with the current lib would need hooks in load_vp, extract_file and (I think) build_vp and add_file, update_file, and load_file (the last three would be for adding a flag to specify if you want the file compressed or not.)
The attention to the library is appreciated, but IMHO you're focusing too much on making it standardized rather than on what's most effective for a VP tool of any sort. But for editors like POFCS or, ehm, something else :p what you're saying about coding it C/C++ standards style makes sense.
What I will say is that I have never really liked the >> operators because they seem too ambiguous, which is why you may notice that I've never really used them in any code I've released. eg, if I hand them a string, will they simply store the string? Will they null-terminate it? Will they add an int that tells the string length? Or will it be a short? I couldn't tell you without experimenting...all I can say is that if you were to take the output, and then use the << operator, you'd get the input to the >> operator. Assuming you used the same type, of course, otherwise all bets are off.
Edit: The most confusing thing is the add_file, update_file, and load_file trio, but that actually isn't too bad. The rest of the names are pretty obvious - extract_file extracts a file and build_vp builds a VP. I doubt 'learning' the API would take much more than 5-10 minutes. Using it would take less time, esp. with an example.
-
I've got my head wrapped around other things for a while so I don't want to get into this conversation too far just yet. The basic things to be aware of though are:
- many files that we use now are already very compressed (DDS, JPEG, OGG, ADPCM) so ship textures, most EFFs and the new ANI replacement and many sounds aren't going to compress well anyway.
- we have to be able to stream out of a VP to allow for music and, later, movies in a VP/CVP. we need to avoid decompression overhead as much as possible.
I was going to figure out one of two things to settle all of this:
- per file, type based compression. We would compress by file and simply avoid the overhead associated with compressing already compressed file formats.
- sorted data blocks in the CVP. A CVP would contain two data sections, one compressed and one uncompressed, sorted by type to be most efficient.
You can probably figure out the details of what I was thinking from that. I need zlib for something else, which is one reason I had chosen it (plus it works well, is portable, and has a BSD-type license).
-
When you say stream, though, why not do it like so (presumably something like this would be in extract_file):
char* data_buf = new char[chunk_size];
fseek(sfp, file_pos, SEEK_SET);             // jump to the file's data in the VP
for(int i = 0; i < file_size; i += chunk_size)
{
    fread(data_buf, chunk_size, 1, sfp);    // read one chunk from the archive
    decompress_chunk(data_buf, chunk_size); // decompress it in place
    fwrite(data_buf, chunk_size, 1, dfp);   // write it to the destination file
}
delete[] data_buf;
Obviously that exact code won't work, except with a compression ratio of 1:1 :p, but it should get the general gist across.
CFile, I believe, can 'stream' out of VPs already...I never intended my library to replace the FS2 system though (Possibly the only time that may have happened :nervous:), so the fs2_open code would have to be directly modified in any case.
I need to wake up tomorrow, so I can't look up zlib now... but the basic code snippet above would work for OGG.
-
Oh, you will have to excuse my energy; you should know by now it's "beginner's" energy, when everything seems like it can be improved, reworked, redesigned. All this until the person actually starts doing it and the time and energy consumed take their toll, so please bear with me :)
Now, yes, I agree, the current build system is quite simple, but the library might develop into something a little more complex, and then a more powerful build system comes in handy. Another reason is that many IDEs (such as KDevelop) have built-in support for automake, so importing and working with automake-based projects in such IDEs is just fun, heh. Other advantages are those listed in the previous reply (distcheck, automatic release tarballs, etc.).
Also (as you have noticed already) I give A LOT of importance to coding style. I do NOT consider coding style more important than the code's actual functionality; I consider them EQUAL. That is, I never make something that just works (or at least I try not to) or something that just looks nice :) I give them both exactly the same importance, so I try to achieve either both of them or neither.
About the API, you are right that because I really don't know much about FS-specific VP file needs, my suggestions might look a little off-topic. I agree that the main usage for the library would be FS and then the tools: the packer/unpacker and the tool you mentioned, POFCS. I will need to investigate this more (or maybe someone who already knows can say): what are the needs of these programs for reading/writing VP files?
About ANSI C++, I always compile my programs with "-Wall -std=c++98 -pedantic" :). Now, the <<, >> operators are for formatted input/output of some data types in ASCII encoding. For unformatted I/O (writing/reading chunks of bytes) iostreams have other specific methods. Speaking specifically about VP: because we don't store ASCII-formatted integers or other such data, the fact that an iostream interface to VP files also provides access with << and >> is kind of useless :) But there are still other advantages (besides people being able to reuse their knowledge of the iostreams API on VP files). For example, one principle I apply in C++ programming is "resource acquisition is initialization"; this basically means that if you are acquiring some resource (allocating memory, locking a mutex, opening a file) then you should generally do it in the initialization of an object (make wrappers for your resources if you don't have them). In the iostreams case it means always opening files as an ifstream or whatever, because then, when you exit the context where the ifstream object is defined, the file is automatically closed and the resource is released. This is actually A LOT MORE important than it sounds at first glance. To give a specific code example:
in C/C++:
int open_vp(const char *filename)
{
    FILE *fd = fopen(filename, "rb");
    if (!fd) return -1;
    if (readheader(fd)) {
        fclose(fd);
        return -1;
    }
    if (readindex(fd)) {
        fclose(fd);
        return -1;
    }
    return 0;
}
This is code duplication and it's error-prone. The way I would do it in C++ is:
int open_vp(const char *filename)
{
    ifstream fd(filename);
    if (!fd || readheader(fd) || readindex(fd)) return -1;
    return 0;
}
That's ALL! Because the ifstream destructor will be called when exiting the stack context, and the file gets closed. In general this principle is very useful and I consider it one of the most important advantages of C++ (if I couldn't do this then I would probably not program in C++ at all :)).
Anyways, first I have to check out exactly what needs FS and the tools have for VP files, and see if the new API access method is really worth it.
-
About streaming, one other solution to reduce decompression latency is to have transparent "readahead decompression" and buffering in the library. You cycle on calling vp_read(), but internally it uses a buffer of some size and decompresses only when that buffer is exhausted (it then decompresses another chunk into the buffer). Also, to further reduce decompression latency, I would add a block buffer layer in the VP library to cache decompressed blocks with an LRU policy :)
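A sketch of that readahead idea; the vp_file fields and refill_next_block() are hypothetical, the point being that vp_read() only pays for decompression when the buffer runs dry:

#include <stddef.h>
#include <string.h>

typedef struct {
    unsigned char buf[32 * 1024];  /* one decompressed block */
    size_t buf_len, buf_pos;       /* valid bytes / read cursor */
    /* ... underlying FILE*, z_stream, current file bounds ... */
} vp_file;

int refill_next_block(vp_file *f); /* hypothetical: inflates the next block into buf */

size_t vp_read(void *dst, size_t want, vp_file *f)
{
    size_t done = 0;
    while (done < want) {
        if (f->buf_pos == f->buf_len) {      /* buffer exhausted... */
            if (!refill_next_block(f))       /* ...decompress the next chunk */
                break;                       /* end of file */
        }
        size_t n = f->buf_len - f->buf_pos;
        if (n > want - done) n = want - done;
        memcpy((char *)dst + done, f->buf + f->buf_pos, n);
        f->buf_pos += n;
        done += n;
    }
    return done;
}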
-
Look in the modding forum for the new POFCS stuff. Kazan has his own VP library (there are at least five different sets of VP functions running around in various places on the web, go figure). However, POFCS is apparently online on the SourceForge CVS page.
Also, one idea I was throwing around at one time was 'fragmented' VP files... these would be VP files that weren't necessarily 100% file data, and might have some empty/unused space as files shrunk, or were moved to the end because they grew and there weren't any empty spaces large enough for their new size. The benefit would be that one could quickly modify VP files. (No need to move all the file data after a file was modified.) However, I ditched the idea because it could spur the development of crappily inefficient mod/campaign releases because nobody bothered to defrag them - and if someone's modifying a VP that much, the data probably shouldn't be in one in the first place.
Might come in handy for something though.
-
Originally posted by WMCoolmon
Obviously that exact code won't work, except with a compression ratio of 1:1 :p, but it should get the general gist across.
I'm not saying that it can't work like that, just that it shouldn't. I'm trying to avoid the overhead of decompression and the wasteful nature of compressing compressed files. That's one of the reasons I've said before that I didn't like the idea of a compressed VP format. With a game like this and with its memory requirements, anywhere we can save is a good thing.
CFile, I believe, can 'stream' out of VPs already...I never intended my library to replace the FS2 system though (Possibly the only time that may have happened :nervous:), so the fs2_open code would have to be directly modified in any case.
Yes, but then it's easy since it's really just streaming off of the disk. The decompression will all be built into CFILE so it will be completely transparent, just like using a file in a VP is now. I don't want to take the easy road on this one though. I want something that works the best and is easy to package. If you've got something like TBP with a 650 meg VP, it can take a while to (un)compress that thing. As more already-compressed file formats get stored in there, the time vs. ratio doesn't add up anymore.
And I was going to rewrite CFILE to do this. I just wanted to mention that it was going to be done so that your lib could be ready for the change and not need a complete rewrite to fit this capability in.
-
If you are going to compress the whole data area (to get the maximum compression ratio) you will have problems with selectively reading some files - that is, getting to the point where some specific file's data begins. Before thinking of any solutions to make this fast, I think we should first check out the algorithm, as I think it is a "block compression" algorithm and as such there might be some properties of it that we can use to speed up seeking to a specific byte in the compressed data stream.
For reference I think we should read this: http://www.gzip.org/zlib/rfc-deflate.html
-
[my comments on this thread]
compressed VPs are a complete and total bad idea: the purpose of the VP format is to save time and space on disk access, because you can cache its FAT (speed++) and, being an aggregation of a bunch of smaller files, it has less FS overhead --- compression would more than completely destroy the speed advantage
replace-changed-file on the fly would be more than slightly dangerous -- i could easily code this: the thing is I don't care to, because the only time you could do this [safely] is when the file is the same size
my VPCS 2.x should compile and run on linux just fine - just like PCS 2.x
that was most of my purpose in starting the 2.x branches in both applications. Once they're both done i'm going to pack 'em together (i'll probably add some features to VPCS 2.x) in one installer for windows and ship 'em together.
I need someone to prep autoconf/automake/etc for linux
-
I can try to make the autotools stuff for VPCS. Just give me a URL to your latest version.
Other thing: would you consider unifying all this VP management code? It's kind of useless to have all this code spread through all these different projects. It makes maintenance a living hell (if it weren't for the VP file's very simple structure, I would say there are probably bugs in one VP implementation that are not in another and so on, instead of having a common library for this where all the bugs would get fixed). This also allows someone (or some people) to focus just on this; specialization is a good thing(tm) :)
About the VP archive speeding up FS access: hmm, I am not sure, but if the FS game is loading all the needed data files before starting a mission, you mean it would only speed up this loading step (there is also the streaming thing, but that should work the same in VP, separate-files, or compressed-VP mode, because it means streaming from a single file's content).
Hmm, I don't have much experience with Windows filesystems, but on what I'm used to (Linux), with filesystems like reiserfs this shouldn't be a real issue. OK, the index access should be a lot faster (being all in memory), but this also means the index always has to fit in memory (ie it can't be huge). Another thing: actually reading the file contents shouldn't be faster from a VP than from a normal filesystem, because you have the same reason why it can be slow: fragmentation. Both on normal filesystems and in a VP, loading a sequence of files can trigger loading of blocks from random positions on the disk, slowing the operation down because of disk latency. But all this is just theoretical talk; a benchmark should prove it quite easily :)
-
http://cvs.sourceforge.net/viewcvs.py/alliance/vpcs2/
Unifying code: i already have - all my apps use my VP implementation
I was the first one out there with a serious tool for making VPs -- VP View could read them before mine, but I wrote first, and DM's tools are written in MFC *barf*
I'm not rewriting four or five programs to appease someone's unification dream - and like hell i'm standardizing ANYTHING on an STL class -- they're decent classes but you'll never see me use them as parent classes.
It's faster to search through an array in memory (which you can load the VP FAT into) than iterating through a list on disk.
"compressed" VP mode would
NTFS is more like a big database than a filesystem -- lotsa overhead because of that.
Fragmentation of one large file has less impact than a hundred smaller files being fragmented.
-
Nobody is telling you what to do; those are suggestions based on arguments, and we can talk arguments. These are technical decisions and as such should be based strictly on technical arguments. Many brains always think better than a single one.
I will look into adding autotools support to that. I will also look through the API and make my suggestions on it (if any).
Having unified code for reading/writing VPs is better than having dozens of separate implementations (this applies to any set of code that is supposed to do exactly the same thing), because bugs get fixed in one single tree and because working on a single tree concentrates more brainpower than having people work on separate trees. All the other code might just end up using your API if that's what is concluded in the end, but that would still be unification :) I was hoping you would be more open to this type of discussion, but I guess you had a lot of talks about these things long before I started to look at this, so my suggestions can sound like a waste of time.
Do you have any technical problems with the C++ standard library? Trying to program according to a standard (by using its features and designing your own code on similar APIs) has advantages because that's why standards exist, to not have people reinvent the wheel all the time and all "speak another language". Of course I am in no way a promoter of "always program using the C++ standard library way" if that is technically proven to not be so good :)
I was not talking about fragmentation inside the VP file when reading a single file, but about the I/O seeking involved (the reason why fragmentation is bad in general) when reading some set of files, no matter whether from the filesystem or from a VP. This list of files doesn't need to be sorted in any way such that the contents turn out to sit one after another in the VP file; they can be totally random. As such, when reading this set of files (say you have 1000 random files to read from the VP or from the filesystem) you might get almost the same latency because of I/O seeks. It also depends on the size of the files: if most of the files read from VPs are such that reading them from a filesystem would stress the OS block cache more than reading them from a single file (ie you get more hits from that cache when reading from a single file), then again using a VP is better.
In general, I am not trying to prove that VP is slower. It is very clear that IN GENERAL it is faster. But also, until someone comes up with exact benchmark information, I can argue that this difference might not be very big, that's all :)
About the argument that compression would eliminate the VP speed benefits, I tend to disagree. It depends on how you program it. For example, if we go the way of compressing the "data content" as a whole, then a problem would be seeking to a specific file in that compressed stream. Now, because "deflate" compresses in blocks of 32 kbytes, this means the index can store, instead of the direct offset into the VP file, an offset to the start of the compression block that contains the beginning of that file, plus an offset inside this block (as if it were uncompressed) to know EXACTLY where the file starts.
So starting to read a specific file from the compressed VP file would mean seeking to the beginning of the compression block that contains the beginning of the file in uncompressed form, uncompressing it, and then reading the file data from where that file starts inside that uncompressed block. The rest of the file content is read just by continuing to read the next compression blocks (which are contiguous with this block in the VP file) and uncompressing them until the end of the file is reached. So as you can see, with this design there are no additional I/O seeks; there is only ONE seek (just as in the uncompressed case). The only additional cost I can think of here is the decompression itself. But because (as far as I can tell from watching my system's I/O activity when loading a mission) this reading from VP is I/O bound, the additional CPU cost of decompression should really go unnoticed, and if programmed well should not introduce additional delays.
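To make that concrete, a sketch of the index entry this implies (the field names are mine, not an existing format):

/* each entry stores where the 32k compression block starts in the .cvp,
 * plus where the file begins inside that block once decompressed */
struct cvp_index_entry {
    long block_offset;  /* real seek target inside the .cvp */
    long intra_offset;  /* file start within the decompressed block */
    long file_size;
};

/* reading a file is then still a single I/O seek:
 *   fseek(fp, e->block_offset, SEEK_SET);
 *   inflate_block(fp, block);            -- hypothetical helper
 *   start = block + e->intra_offset;
 * and the rest of the file follows by inflating the blocks that come
 * contiguously after it */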
-
Originally posted by dizzy
I will look into adding autotools support to that. I will also look through the API and make my suggestions on it (if any).
ty
Originally posted by dizzy
Do you have any technical problems with the C++ standard library?
yes - there are compiler portability issues, severe performance issues and it's impossible to debug into the STL and see anything useful
Originally posted by dizzy
Trying to program according to a standard (by using its features and designing your own code on similar APIs) has advantages because that's why standards exist,
however coding to standards for the sake of coding to standards is inherently stupid - VPs are NOT anything like ANY of your standard streams - trying to treat them as if they were is inefficient at best
Originally posted by dizzy
to not have people reinvent the wheel all the time and all "speak another language". Of course I am in no way a promoter of "always program using the C++ standard library way" if that is technically proven to not be so good :)
Writing a class that treats a file in a manner that is SANE to its internal formatting is not "reinventing the wheel" - attempting to treat a VP like it could be class vp : public fstream is making a square wheel and calling it perfect
the file listing in a VP is structured in a specific manner - files are sorted alphabetically (case sensitive) inside their folders - and i believe folders are sorted alphabetically as well, i can go back and check this.
Originally posted by dizzy
About the argument that compression would eliminate the VP speed benefits, I tend to disagree. It depends on how you program it. For example, if we go the way of compressing the "data content" as a whole, then a problem would be seeking to a specific file in that compressed stream. Now, because "deflate" compresses in blocks of 32 kbytes, this means the index can store, instead of the direct offset into the VP file, an offset to the start of the compression block that contains the beginning of that file, plus an offset inside this block (as if it were uncompressed) to know EXACTLY where the file starts.
A) you have no way to store that block offset in the VP - there is NO ROOM for it in the FAT - we would have to either break support or calculate the block and seek to its beginning instead of knowing the position of the file and seeking to it to read
B) You have to DECOMPRESS the bloody file
Effect(A): Break Compatibility, or Increase CPU Costs
Effect(B): Massive Increase in CPU Costs
Effect(A+B): Requires a MASSIVE rework of the filesystem module of fs2_open
A = Unacceptable
B = Unacceptable
A+B = You're out of your mind
I FLAT OUT REFUSE to support ANY type of compression in VP files as it is antithetical to the design philosophy of the file and its usage
You seem to be blissfully ignorant of the CPU costs of decompression, the delicacy of the filesystem module in fs2_open, and the purpose of VP files.
Reading from a VP:
* Open File, Seek to FAT, bitblt FAT to memory
* (Close file if you wish)
* Find file in FAT
* (Open File if you closed it)
* Seek to File's offset
* Read File (often i just bitblt small files like textures into a memory block and do everything in ram)
Reading from a theoretical compressed VP:
* Open File, Read Header, Seek to FIRST BLOCK that contains the FAT, decompress it: find the beginning of the FAT in that block, start reading it - decompress any more blocks you need
* Store FAT in memory
* (Close file if you wish)
* Find file in FAT
* (Open file if you closed it)
* Seek to first compression block, decompress it, find beginning of file, read on, decompressing any additional blocks you need
---------------------
Compression VASTLY increases EVERY form of overhead, breaks compatibility, serves no real purpose
[edit again]
oh.. and most of the files put in VPs aren't compressible because the file type itself is compressed data (.dds, .jpg, .png) or is incompressible data (POFs aren't really very compressible..)
incompressible/already-compressed data accounts for probably about 80% of the data in a VP
-
Try not to get annoyed at Kazan, dizzy... whenever he gets in a discussion, whether his points are valid or not, he tends to come off as rather brash. :)
-
Originally posted by Kazan
my VPCS 2.x should compile and run on linux just fine - just like PCS 2.x
Sorry to say, but no. You are using several Windows-specific commands in there, just like you tend to do with most everything you write, and don't provide a replacement, or even provide notice, that said functions are Windows-only. GCC likes to hate how you do some C++ things too. It's just like the fs2netd server, which took me all of 20 minutes to make Linux-friendly but over 5 hours to find replacements for the DOS-specific commands.
-
Originally posted by taylor
Sorry to say, but no. You are using several Windows-specific commands in there, just like you tend to do with most everything you write, and don't provide a replacement, or even provide notice, that said functions are Windows-only.
A) what functions am I using that are windows only
B) I don't provide replacements because I don't know what functions I'm using that are windows only
C) I would find out which functions these are when I get around to making sure it compiles on linux - however I am working on functionality right now and not compatibility - compatibility is a cleanup phase
Originally posted by taylor
GCC likes to hate how you do some C++ things too.
Yeah.. some shiat that, done the stylistically correct way, makes MSVC *****.
I should get in the habit of writing on GCC then porting to MSVC - would be faster and easier: but I'm not using linux as my primary OS at home.
Originally posted by taylor
It's just like the fs2netd server, which took me all of 20 minutes to make Linux-friendly but over 5 hours to find replacements for the DOS-specific commands.
what DOS specific function was I using?
[edit]
filelength(FILE *);
DUH
i forgot to remove that and replace it with a seekg/tellg
some of my older code that is getting "rolled up" into newer apps was written before my emphasis on platform agnosticism so until it gets cleaned some of it will contain old references.
feel free to do those cleanups anytime you find one that needs done
-
Originally posted by Kazan
A) what functions am I using that are windows only
B) I don't provide replacements because I don't know what functions I'm using that are windows only
C) I would find out which functions these are when I get around to making sure it compiles on linux - however I am working on functionality right now and not compatibility - compatibility is a cleanup phase
Mainly file access stuff I think, like you said: "filelength". I think there was something else too, but I don't remember off the top of my head since I've been too involved with the OSX version the past few days and can't really concentrate on anything else. It's nothing that can't be fixed, but it is annoying that you have to compile, fix, compile, fix all day, since those things aren't marked and going line by line through the code looking for stuff isn't any faster. That stuff is pretty much always small though, and it's the compiler errors which can take real time to deal with.
what DOS specific function was I using?
kbhit and something else. It's been months since I looked at any of that so I'm not sure exactly what all I changed. I eventually found a GPL'd replacement for kbhit that fit easily into what you already had coded and used it instead. That's stuff that never hit CVS though since I still don't know if it even worked right or not.
-
kbhit is from conio.h .. and why there was a reference to kbhit in a GUI app is beyond me :D
like i said - VPHandler was old functional code imported up into a new project
-------------
look at this much newer code
http://cvs.sourceforge.net/viewcvs.py/alliance/pcs2/FileList.h?rev=1.5&view=auto
(lol i just noticed i had two case-change functions in there from when it was used in another app with std::string - kaz_string has str_to_lower and str_to_upper in the class)
http://cvs.sourceforge.net/viewcvs.py/alliance/pcs2/FileList.cpp?rev=1.5&view=auto
-
Originally posted by Kazan
kbhit is from conio.h .. and why there was a reference to kbhit in a GUI app is beyond me :D
No I'm talking about fs2openPXO. It used kbhit which is old DOS functionality and by no means cross-platform.
-
Originally posted by Kazan
yes - there are compiler portability issues, severe performance issues and it's impossible to debug into the STL and see anything useful
however coding to standards for the sake of coding to standards is inherently stupid - VPs are NOT anything like ANY of your standard streams - trying to treat them as if they were is inefficient at best
Oki, actually at work we use the STL on large projects (projects which handle tens of megabytes per second of file processing and many clients at a time, so I'm not speaking about GUIs or something, heh) and I wouldn't say it's impossible to debug... :) But anyway, I can't really prove this with words unless we talk about a specific example, so I will let my code speak for itself, as I do make use of the STL and the rest of the standard C++ library where that usage is good. Also, about portability: I don't think FS2, being a C++ program, should try to support systems that are not fully ANSI C++98 compliant; otherwise, why don't we switch FS2 to C89 or something...
Writing a class that treats a file in a manner that is SANE to its internal formatting is not "reinventing the wheel" - attempting to treat a VP like it could be class vp : public fstream is making a square wheel and calling it perfect
I wouldn't use "perfect" for any code in the real world :) Also, I very much agree with your point, if that is the case here, which is why I'm trying to discuss this; I would expect good chances of being wrong, because I don't have much experience with the FS2 code.
the file listing in a VP is structured in a specific manner - files are sorted alphabetically (case sensitive) inside their folders - and i believe folders are sorted alphabetically as well, i can go back and check this.
Is this sorting some mandatory requirement? Why? :) (please bear with me...)
A) you have no way to store that block offset in the VP - there is NO ROOM for it in the FAT - we would have to either break support or calculate the block and seek to its beginning instead of knowing the position of the file and seeking to it to read
B) You have to DECOMPRESS the bloody file
Here you got me lost... By FAT do you mean the index in the VP file? Presuming you're talking about the index and it really does need to be sorted, I don't understand how sorting the index has anything to do with how the files are stored (compressed or not). In the end, sorting the index just means reordering the index entries; their values (like the offset to the compressed block that contains the beginning of the file) remain the same. Do you mean having the contents of the files also sorted in the order found in the index? What would be the purpose of that? :)
You seem to be blissfully ignorant of the CPU costs of decompression, the delicacy of the filesystem module in fs2_open, and the purpose of VP files.
Well, I've been saying that all along; no need for 3 replies to realise it :) Yes, I am TOTALLY ignorant of FS2's needs for VP files, which probably means their main USE, their main REASON to exist. That is why I'm arguing here, so someone can clarify this for me.
Reading from a VP:
* Open File, Seek to FAT, bitblt FAT to memory
* (Close file if you wish)
* Find file in FAT
* (Open File if you closed it)
* Seek to File's offset
* Read File (often i just bitblt small files like textures into a memory block and do everything in ram)
Ok, this doesn't sound too far from what I had in mind for FS2 usage, even without having read the code...
Reading from a theoretical compressed VP:
* Open File, Read Header, Seek to FIRST BLOCK that contains the FAT, decompress it: find the beginning of the FAT in that block, start reading it - decompress any more blocks you need
* Store FAT in memory
* (Close file if you wish)
* Find file in FAT
* (Open file if you closed it)
* Seek to first compression block, decompress it, find beginning of file, read on, decompressing any additional blocks you need
Actually, compressing the index (if that is what you mean by FAT) was just an option. My last proposal for the compressed VP file didn't have such an option, so the index is still stored uncompressed. So the flow becomes:
* Open File, Read Header, read FAT into memory (if that is what you mean by bitblt)
* (Close file if you wish)
* Find file in index (or FAT)
* (Open file if you closed it)
* Seek to first compression block (the offset to this is already in the index, so it's just an I/O seek), decompress it (OK, some CPU load), find the beginning of the file (the block has been decompressed at the previous step IN MEMORY (where else), so "finding the beginning of the file" just means returning the data that starts there; you don't need to "seek" for it, you have the offset into the decompressed block stored in the index too)
So again, the only overhead I see here is the CPU used by decompression. Because you still think this is a huge overhead and I don't, I have already started working on my own VP library that WILL have this kind of compression support, and you can expect some benchmark numbers really soon :) Let the numbers speak for themselves...
Compression VASTLY increases EVERY form of overhead, breaks compatibility, serves no real purpose
About breaking compatibility, I think Taylor's idea to give these files their own extension (.cvp) would solve that, because people will see whether their tools support .vp only or also .cvp.
About purpose, well, I can't say I see a big one either... it's not like disk usage is much of an issue, and as you also said, many files are already compressed. However, there are a lot of "large" files in VPs that do compress a lot; people who distribute uncompressed VP files (ie non-zipped) should be hunted down and killed :)
oh.. and most of the files put in VPs aren't compressible because the file type itself is compressed data (.dds, .jpg, .png) or is incompressible data (POFs aren't really very compressible..)
incompressible/already-compressed data accounts for probably about 80% of the data in a VP
Again, I don't have much experience, but take the fsport VP files for example:
-rw-r--r-- 1 dizzy users 82120116 Aug 29 11:45 tango_fs1.zip
-rw-r--r-- 1 dizzy users 159327089 Aug 29 00:57 tango_fs1.vp
I would say that is a huge difference :) (it happens with some other fsport VP files too).
In conclusion (if I'm allowed to make one :)), let's see how my VP lib project develops and wait for the benchmark numbers. There doesn't seem to be much purpose for compressed VP files, because their only purpose would be to reduce used disk space AFTER INSTALLATION (that is, people who DISTRIBUTE uncompressed VP files should be killed as stated; this is different from the VP files you then keep in your FS2 dir, which can be uncompressed), and I don't think there are many games out there doing that right now, as disk space is very cheap. About unification of the VP code, I still think it is something that proves good in time, but if someone as experienced and influential as Kazan says it's stupid then I'll drop it (for the moment :)).
-
Originally posted by Kazan
however I am working on functionality right now and not compatibility - compatibility is a cleanup phase
Ok, I don't want to start another flame war or get too personal, but I think this principle is totally wrong. If you ever intend (from the beginning, that is) to have your code working on multiple platforms, then compatibility is NOT a cleanup phase but actually a DESIGN phase. Of course this requires good knowledge of how things are done on multiple platforms, and if this knowledge is lacking (hell, I would be the first one to admit that I don't know how a lot of things are done on Win32) then of course things are done within the limits of the existing knowledge.
But, if one has knowledge of, for example, how to set up a timer on Win32 and how to set up a timer on POSIX systems, and this person notices that the Win32 call has different arguments that do not exist/have no meaning on POSIX, and that those arguments are NOT necessary for the project the person is working on, then I think that person should make a wrapper call with the common interface instead of a direct call, and do specific implementations for the specific systems. Would you agree? :)
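For example, a minimal sketch of such a wrapper for a hypothetical millisecond tick counter (GetTickCount() and gettimeofday() are the real platform calls; the wrapper name is mine):

#include <stdint.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/time.h>
#endif

/* one common interface, platform-specific guts */
uint32_t get_ticks_ms(void)
{
#ifdef _WIN32
    return GetTickCount();                    /* Win32 */
#else
    struct timeval tv;
    gettimeofday(&tv, NULL);                  /* POSIX */
    return (uint32_t)(tv.tv_sec * 1000 + tv.tv_usec / 1000);
#endif
}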
-
the File Allocation Table in a VP has a VERY specific format in data and in ORDER
here is the header and the FAT table entry
struct Read_VPHeader
{
    char signature[4]; //"VPVP"
    int version;       //"2"
    int diroffset;     //bytes from beginning of file
    int direntries;    //number of files in directory
};
struct Read_VPDirent
{
    int offset;        //from beginning of file
    int size;
    char filename[32]; //Null-terminated string
    int timestamp;     //The time the file was last modified, in seconds since 1.1.1970 0:00.
                       //Same as from calling findfirst/findnext using any C compiler.
};
DIRECTORY data is stored in here too - in a sorta weird way.
FS2 reads the FAT sequentially and watches for "special" entries that it takes as directory change directives
[this is off the top of my head and i haven't rewritten a VP writer in several years - consult code and take it as the authoritative source in any discrepancies]
a "enter directory" 'directive' has offset = (offset of first file in directory), size = 0, filename = directory name, timestamp = 0
a "leave directory" `directive has offset = (offset of first file in the next directory), size = 0, filenane = "..", size = 0"
code note - VPHandler writes
//Filename[32] = 0x2E 2E 00 CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC [Hex Bytes]
for "leave directory" - which is exactly what [V]'s tool wrote
Here follows an example of a simple FAT
----------------------
FILE { 16, 0, 'data', 0 }
FILE { 16, 0, 'effects', 0 }
FILE { 16, 1024, 'somefile', timestamp }
FILE { 1040, 126, 'fileb', timestamp }
FILE { 1166, 0, '..', 0 }
FILE { 1166, 0, 'tables', 0 }
FILE { 1166, 2048, 'menu.tbl', 0 }
-----------
[edit]
and data is packed
[16-byte header]
[file1]
[file2]
[file3]
[...]
[fileN]
[FAT ENTRY 0]
[FAT ENTRY 1]
[...]
[FAT ENTRY N]
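A sketch of walking that FAT as described; enter_directory/leave_directory/add_file are hypothetical callbacks, error handling is skipped, and it assumes the struct matches the packed on-disk layout:

#include <stdio.h>
#include <string.h>

void enter_directory(const char *name);  /* hypothetical callbacks */
void leave_directory(void);
void add_file(const char *name, int offset, int size);

void walk_fat(FILE *fp, const struct Read_VPHeader *hdr)
{
    fseek(fp, hdr->diroffset, SEEK_SET);
    for (int i = 0; i < hdr->direntries; i++) {
        struct Read_VPDirent e;
        fread(&e, sizeof(e), 1, fp);
        if (e.size == 0) {                       /* directory directive */
            if (strcmp(e.filename, "..") == 0)
                leave_directory();
            else
                enter_directory(e.filename);
        } else {
            add_file(e.filename, e.offset, e.size);  /* a real file */
        }
    }
}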
-
Originally posted by dizzy
Ok, I don't want to start another flame war or get too personal, but I think this principle is totally wrong. If you ever intend (from the beginning, that is) to have your code working on multiple platforms, then compatibility is NOT a cleanup phase but actually a DESIGN phase.
perhaps you should pay attention to nuance and context a little more closely BEFORE YOU ****ING FLAME
Cross-platform compatibility is CONSIDERED FROM THE DESIGN STAGE - the "cleanup issues" is CLEANING UP THINGS THAT MAKE ONE COMPILER ***** BUT ANOTHER NOT AND CROSS-PLATFORMIZING OLD LIBRARIES THAT GOT IMPORTED UP INTO THE NEW APP
Originally posted by dizzy
But, if one has knowledge of, for example, how to set up a timer on Win32 and how to set up a timer on POSIX systems, ...
I'm using wxWidgets - NONE of my GUI code is platform-specific in my 2.x trees
-
i don't see the POINT of a .cvp - it's POINTLESS
it just SLOWS things down
i can get a 300GB Maxtor 5400rpm 2MB cache ATA133 drive on pricewatch for $111 [inc shipping]
harddrive space = not a problem
-
Originally posted by Kazan
perhaps you should pay attention to nuance and context a little more closely BEFORE YOU ****ING FLAME
Cross-platform compatibility is CONSIDERED FROM THE DESIGN STAGE - the "cleanup issues" is CLEANING UP THINGS THAT MAKE ONE COMPILER ***** BUT ANOTHER NOT AND CROSS-PLATFORMIZING OLD LIBRARIES THAT GOT IMPORTED UP INTO THE NEW APP
hahahaha, perhaps I should do that more often (ie read more carefully before replying) sorrry for that :):)
-
Originally posted by Kazan
i don't see the POINT of a .cvp - it's POINTLESS
it just SLOWS things down
Generic compression just to compress is pointless and slows things down. I'm in full agreement there, and that's why I hated the idea of a compressed VP, because of its "just compress it" nature. I want to do it right though, and get compression with speed valued more than size, and I guarantee that if I can't work out the speed issues to my satisfaction the code isn't ever going to see CVS.
You have to remember though that all of the Quake and Doom3 based games store their files in compressed archives. It is feasible to do in a large game. I'm not sure that same approach would work well for FS but something better is needed than the standard VP. I've currently got 4.7Gig worth of FS2 and related mods on my hard drive. That's insane. Hard drive space may not be a problem for you (or me for that matter) but it will be for some. Remember that we are supporting multiple platforms and on some of those it may not be so easy to just get a cheap hard drive for extra space. Something will have to be done eventually and though it may be a few months before I actually get working code, I'm hoping that CVP will be part of the solution to that.
-
well i can think of a MUCH SANER way to store the FAT in a packfile
struct fe_packhead // 16 bytes
{
    fe_char filesig[8];
    fe_int filever;
    fe_int numfiles;
};
struct fe_pack_frecord // 256 bytes
{
    fe_char filename[120];
    fe_char direct[128];
    fe_int file_size;
    fe_int file_offset; //offset from the beginning of the file
};
http://cvs.sourceforge.net/viewcvs.py/alliance/ferrium/FileSystem/FE_Pack.h?rev=1.6&view=markup
http://cvs.sourceforge.net/viewcvs.py/alliance/ferrium/FileSystem/FE_Pack.cpp?rev=1.11&view=auto
-
Originally posted by Kazan
perhaps you should pay attention to nuance and context a little more closely BEFORE YOU ****ING FLAME
Cross-platform compatibility is CONSIDERED FROM THE DESIGN STAGE - the "cleanup issues" is CLEANING UP THINGS THAT MAKE ONE COMPILER ***** BUT ANOTHER NOT AND CROSS-PLATFORMIZING OLD LIBRARIES THAT GOT IMPORTED UP INTO THE NEW APP
Kazan, that was completely uncalled for. Not only was dizzy not flaming you, he was trying to avoid a possible flame war by prefacing his comments with "Ok I don't want to start another flame war or get too personal".
Yours is entirely unwarranted behavior for a member of the SCP, especially since he was trying to offer some helpful advice and especially since you were the cause of the misunderstanding. Cross-platform programming is part of the design stage, and if you're splitting hairs then it's no wonder your post was confusing.
-
Because someone disagrees does not mean someone is flaming.
I am afk this week, I will be back Sunday night.
Be nice.
-
Well, I don't mind having compressed VP support in any of my programs. Like you say, many of the filetypes used are already compressed, so it's not like VP files now are 100% dedicated to speedy accessing of files, or else using DDS, JPG, OGG, and all the rest would be disallowed.
And finally, VPCS was a total pain in the ass to use. If it had been a program that was usable for efficiently creating VP files, I would never have written VPMage. Not to mention that you have a habit of abandoning your projects to start a new one. And a habit of reacting violently toward people who disagree with you. So if you want to hold up progress by flat out refusing to support compressed files, that's fine; I'll just upgrade my library so people don't have to be limited by yours.
Finally, not everyone has or is willing to spend $111 just so they can play some campaign or even fs2_open itself.
Edit: And, actually, CVPs would help multi-platform interoperability. No more fiddling with platform-specific shareware programs to uncompress a file, or with a platform-specific installer; just drop the VP in a mod dir and go.
Although that does bring up a relevant point: would it be possible (if the VP format is upgraded anyway) to add the mod option I brought up a while back? (ie the VP is only used if a -mod option agrees with the name stored in the VP file)
This would make installing mods a matter of simply downloading the .cvp to your FS2 directory. Support for some kind of meta-tag (eg parent VP, some kind of meta field) would make mod management via a user-friendly GUI very easy.
-
Goober5000: prefixing comments with "i don't want to start a flame war" and "i don't want to get too personal" does not make them any less flaming - and I only got pissed because he was flaming me based upon statements I didn't make and positions I didn't hold - I was pissed because he ASSUMED
WMCoolman:
VPCS was "Abandoned" because I went inactive - i don't typically abandon my big apps like that - once PCS2 is done I will be going back to revisit VPCS2 to finish it off [after all, it's still tagged RC]
-
Originally posted by WMCoolmon
Although that does bring up a relevant point: would it be possible (if the VP format is upgraded anyway) to add the mod option I brought up a while back? (ie the VP is only used if a -mod option agrees with the name stored in the VP file)
This would make installing mods a matter of simply downloading the .cvp to your FS2 directory. Support for some kind of meta-tag (eg parent VP, some kind of meta field) would make mod management via a user-friendly GUI very easy.
I do want CVP to be a complete upgrade/rewrite of VP and not just a compressed version. We just need to figure out what we want from a better VP format and start making a list. I know that VP-in-a-VP has been brought up before, so maybe that's something to consider here as well.
I'm mainly looking to get these features done:
- compression in a reasonable way
- large file support (over 2gig, of the CVP itself, not individual files)
- dedicated (and easily manipulated) structure area for file and directory listings, so that files can be more easily added/removed/modified without having to remake the entire CVP
Beyond that it's up in the air. Like I said though, I'm not rushing into this. I've been thinking about it for a couple of months and it will probably be December before I start coding, if even that early. I'm looking at this as a from-the-ground-up rewrite so we might as well have it do everything that we want.
-
In that case let me add another thing that I think I suggested for Ferrium...usage-based location. That is, fs2_open records the number of times a file is accessed; this info could then be spat out on exit, if the appropriate command line parm exists.
Another program could then make use of it to reorder the file data so that the most-often-accessed files are at the front of the VP. More advanced algorithms could detect what order files are loaded in for specific missions, and group them appropriately.
It would probably only have a performance increase on computers with slow hard drives, so I never was too adamant about it.
Something else I thought about was symlinks; the way these would work is that, once a given file is specified as being loaded into a VP, additional FAT entries can be created using the first one as a base. Whenever a file is deleted, the calling program checks for any other entries that use the same file data, and deletes the data only if no other entries exist.
AFAIK, this would require no changes to the way fs2_open does VP files (ie it could be done right now.)
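A tiny sketch of what that means at the FAT level, reusing the Read_VPDirent struct Kazan posted above (the values are made up):

struct Read_VPDirent real_entry = { 1040, 2048, "ship.pof",  1122334455 };
struct Read_VPDirent link_entry = { 1040, 2048, "alias.pof", 1122334455 };
/* same offset and size: both names resolve to the same packed bytes,
 * so the on-disk format doesn't change at all */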
And, another pipe dream, some kind of subpackage support. I'm mostly thinking of the mediaVPs, although this could be expanded to mods and such, I suppose.
Basically, the ability to group files into a package, add a title, author, description, site, etc. Then enable/disable them on an individual basis using a GUI (checkboxes, like the flags thing in the Launcher as of now).
This would be under an "Advanced..." button; by default everything would presumably be enabled.
This one seems a bit too extensive and difficult to code to be worth it, at least as of now, but it might be good in the future.
-
Ok, then, Taylor, I think I have these suggestions, for the index structure at least:
struct idxentry {
    uint64 dataOff; /* this can be 2 offsets if we support compression: one for where the compression block that contains the beginning of this file starts, and one for where this file really starts inside that decompressed block */
    uint64 child;
    uint64 brother;
};
Which means that the offset to the data content of the file is a 64-bit integer (supporting over 4 GB of VP file size); then you have an offset to where the child is (if any; this happens for directories only), then to where the next entry in the directory you are in is. This structure allows editing the index rather easily. If you need to add a new file, just add its index entry at the end of the index and "link" it by writing the proper offset into its parent or its previous brother. The "link" itself can happen by a simple overwrite, because those fields of the parent or of the previous brother are already written in the index, so you just overwrite fixed-size, already-allocated fields. Moving a file (or a whole directory tree) is likewise done just by overwriting these kinds of fields. For example, to move a directory tree you have to put NULL in the offset that pointed to the base directory of the tree you are moving (which can be in a previous brother or a parent) and then put the offset to the base directory of the moved tree where it gets linked into the directory structure (into a previous brother or a parent). You get a lot of advantages with this approach; a sketch of the linking step follows.
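A sketch of that linking step; for brevity the child/brother fields are treated as entry indices (0 = none), and uint64 is whatever 64-bit typedef we end up picking:

/* the new entry was already appended at the end of the index; now wire
 * it into its parent's chain with a single fixed-size overwrite */
void link_entry(struct idxentry *idx, uint64 parent, uint64 new_idx)
{
    if (idx[parent].child == 0) {
        idx[parent].child = new_idx;     /* first entry in this directory */
    } else {
        uint64 b = idx[parent].child;
        while (idx[b].brother != 0)      /* walk to the last brother */
            b = idx[b].brother;
        idx[b].brother = new_idx;        /* the single in-place overwrite */
    }
}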
One disadvantage of this approach is that after you randomly edit it by moving and adding entries all around, you might get a very "unordered" index if you unpack a whole directory tree at once. For this problem we can just support (in the tools and in the library) reordering of the index. In the end I don't see any requirement anywhere that the index should be so big that we can't store it completely in memory. If we always store it in memory then we can work out any solution anyway, because we can make sure that when the VP file is closed by the library the index is written, and it can be written in any format we like (even the current one).
About storing the filename I see 3 possibilities:
1. store it as it is now, entirely inside the index entry of that file, with a fixed size allocated for it (so you have a max-size limitation and waste space on short names); this has the advantage that it supports renaming a single file extremely easily
2. store it in the index entry but not as a fixed-maximum-size field, rather as a variable C string (ended with \0) or as a Pascal-like string (store the size and then the string), but still variable; this makes renaming a file more complicated when the new name is bigger than the old one; in that case one solution would be to add a new index entry at the end of the index with the new name and "delink" the old one (but that leaves "unused" bytes in the middle of the index)
3. store it as a variable string but at the beginning of the file's data area; this makes renaming almost impossible :)
-
WMC: Moving files to the beginning isn't going to speed things up - a seek is a seek, whether you're seeking in a +1gb index or a +10byte index
-
And Kazan: Everyone else is pissed because you reacted like a jackass. Knock it off.
Talk to me on IM.
-
hey taylor - i just realized i'm an idiot - i have KDE/cygwin on both my laptop and dev desktop
I could be compiling on there to make sure it'll compile cleanly in linux
[edit]
can i get that makefile?
-
Originally posted by Kazan
can i get that makefile?
I still need to get the #include's fixed in CVS but you can do that yourself when you try it the first time. When you work out GCC's hatred I'll go ahead and start with the endianness stuff.
Here's the Makefile (basic copy of what icculus.org freespace2 uses):
CPP=g++
PCS2_BIN=pcs2

LDFLAGS=$(shell wx-config --libs)
CPPFLAGS=$(shell wx-config --cflags)
CPPFLAGS+=-D_UNIX

%.o: %.cpp
	$(CPP) -c -o $@ $< $(CPPFLAGS)

SOURCES=./BSPDataStructs.cpp \
	./BSPHandler.cpp \
	./COBHandler.cpp \
	./kaz_templates.cpp \
	./pcs_file.cpp \
	./pcs_file_dstructs.cpp \
	./pcs_pmf_cob.cpp \
	./pcs_pmf_pof.cpp \
	./POFHandler.cpp \
	./vector3d.cpp \
	./wxCTreeCtrl.cpp

OBJECTS=$(SOURCES:.cpp=.o)

$(PCS2_BIN): $(OBJECTS)
	$(CPP) -o $(PCS2_BIN) $(LDFLAGS) $(OBJECTS) $(CPPFLAGS) pcs2.cpp

all: $(PCS2_BIN)
-
awesome bro, i'll use that at lunch and commit it to cvs
-
Originally posted by Kazan
WMC: Moving files to the beginning isn't going to speed things up - a seek is a seek, whether you're seeking in a +1gb index or a +10byte index
Not quite true. Firstly, when searching the index, if the first hit is the file you're looking for then you don't have to look any further. Secondly, with seeking, if you put the files in the order that you need them, then any caching the hard drive does will work in your favor, rather than being useless.
I am talking about the actual *order* of the files, not their location relative to the other components of the file... although if the index were first, the same principle would apply, since the VP header certainly doesn't take up 8MB :p