Author Topic: Vectorized Lists  (Read 803 times)


Offline z64555

Prereqs: Watch this talk by Google's Chandler Carruth

What I call a "vectorized list" is a linked list implemented inside a vector or array container, so its elements sit in contiguous memory and can fit entirely in the CPU cache if the list is small. I first heard of the idea in that very talk, and was excited when I figured out how to implement it (excited enough to write this post, at least).

As a quick refresher, linked lists work by having each element hold a reference (or link) to the next element in line. Doubly linked lists hold both a forward link (next in line) and a reverse link (previous in line). To reach a specific element you must walk the list one element at a time, so random access is slow, but inserting or removing an element anywhere in the list is far quicker than in an array or vector. Another disadvantage of lists is that the elements can be scattered across the entire memory space, disrupting CPU caches whenever the next element is not already cached. The worst case for a list is when every hop between elements is a cache miss.

Cache misses with lists can be mitigated by constraining the list to a contiguous memory block, such as an array or vector container. Cache misses are still possible with large lists, but they are far less of a problem than with list elements scattered in memory, since the probability of a cache hit is much higher. Should the element data be very large, the links could be held in a separate array/vector for faster iteration/seeking, at the possible cost of slower access to the data itself.
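
Here's a minimal sketch of the idea. The names (Node, VectorizedList) are my own, not from any existing codebase, and the links are stored as indices into the vector rather than raw pointers, which also means they stay valid if the vector ever reallocates:

Code:
#include <cstddef>
#include <vector>

// A "vectorized" doubly linked list: the elements live in one contiguous
// std::vector, and the links are indices into that vector rather than raw
// pointers, so they survive a reallocation of the vector.
struct Node {
    static constexpr size_t NONE = static_cast<size_t>(-1);  // "no neighbor"
    size_t prev = NONE;  // index of the previous element, or NONE
    size_t next = NONE;  // index of the next element, or NONE
    int payload = 0;     // ... whatever data the element actually carries
};

struct VectorizedList {
    std::vector<Node> nodes;   // contiguous storage for every element
    size_t head = Node::NONE;  // index of the first element in the list

    // Walk the list exactly like a pointer-linked list, but every hop stays
    // inside the same contiguous block, so cache hits are far more likely.
    template <typename Fn>
    void for_each(Fn fn) const {
        for (size_t i = head; i != Node::NONE; i = nodes[i].next) {
            fn(nodes[i]);
        }
    }
};

Insertion and removal still just relink neighbors, so the list semantics are unchanged; only the storage strategy differs.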



FreeSpace already has an example of a vectorized list with a fixed size; just take a look at object.cpp. The object class has two pointers in it that point to other objects, and all object instances are contained within the Objects[] array. There are three notable object instances that are not within the array: obj_free_list, obj_used_list, and obj_create_list.

These objects are in fact the heads of lists within the Objects[] array, and each has a specific function within the object management code:

    • obj_free_list is the head of the list of objects which are unused,
    • obj_used_list is the head of the list of objects which are in-use, and
    • obj_create_list is the head of the list of objects which are in the process of being created.

For FSO's purposes, these links speed up execution when the code needs to iterate across Objects[] for its various tasks. For example, the physics and AI routines iterate over obj_used_list and never have to check whether an instance is in use. Likewise, when creating a new object, you take the first element of obj_free_list, initialize a few things, then pass it to obj_create_list for some potential parallel-processing goodness. (As far as I'm aware, FSO doesn't actually do the creation in parallel; it allocates objects first and then proceeds to create them, but it is entirely possible for the two to run in separate threads.)
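
As a rough sketch of that allocation pattern: the names here (Slot, Pool, MAX_SLOTS) are illustrative, not FSO's actual identifiers, and the list heads are standalone instances acting as sentinels, much like obj_free_list and friends.

Code:
const int MAX_SLOTS = 128;

struct Slot {
    Slot* prev = nullptr;
    Slot* next = nullptr;
    // ... object data would live here
};

struct Pool {
    Slot slots[MAX_SLOTS];  // the contiguous, Objects[]-style storage
    Slot free_head;         // head of the unused list (like obj_free_list)
    Slot used_head;         // head of the in-use list (like obj_used_list)

    Pool() {
        // Both heads start as empty circular lists pointing at themselves.
        free_head.prev = free_head.next = &free_head;
        used_head.prev = used_head.next = &used_head;
        // Initially every slot is free.
        for (int i = 0; i < MAX_SLOTS; ++i)
            link_after(&free_head, &slots[i]);
    }

    static void link_after(Slot* where, Slot* s) {
        s->prev = where;
        s->next = where->next;
        where->next->prev = s;
        where->next = s;
    }

    static void unlink(Slot* s) {
        s->prev->next = s->next;
        s->next->prev = s->prev;
    }

    // Allocation never searches the array: pop the first free slot and push
    // it onto the used list. Iterating used_head later visits only live slots.
    Slot* allocate() {
        if (free_head.next == &free_head)
            return nullptr;  // pool exhausted
        Slot* s = free_head.next;
        unlink(s);
        link_after(&used_head, s);
        return s;
    }

    // Freeing is the mirror image: unlink from the used list, relink as free.
    void release(Slot* s) {
        unlink(s);
        link_after(&free_head, s);
    }
};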



To get a vectorized list with a flexible size, you'd use a vector instead of an array as the container, and then implement the various management functions. Of particular interest is the resize() function:

    resize() - grow
    • Easiest to implement. Resize the containing vector to a larger size, then quickly add the new elements to the unused_list. (If the resize moves the storage, the list pointers will need to be refreshed; see the sketch after this list.)

    resize() - shrink
    • Somewhat difficult to implement. You'll have to defrag the vector so that only unused elements are lost. This can be done quickly by creating a new contiguous memory block, moving the used elements into the front of it, and then refreshing all of the list pointers within them.
    • If you wanted to shorten the used list the way std::list::resize() does, you'd have to identify and skip the used elements that would be deleted. This could be done in the first stage of the shrink, either by iterating through the used list to the tail that will be removed and moving it into the unused list, or by simply stopping once a counter reaches the new size of the list.
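
Here's a rough sketch of the grow case, under one assumption of my own: the links are stored as indices into the vector (as in the earlier sketch) rather than raw pointers, so a reallocation doesn't invalidate them and no refresh pass is needed. With raw pointer links you'd walk both lists after the resize and rebase every pointer against the vector's new storage.

Code:
#include <cstddef>
#include <vector>

// Same index-linked Node as in the earlier sketch.
struct Node {
    static constexpr size_t NONE = static_cast<size_t>(-1);
    size_t prev = NONE;
    size_t next = NONE;
    // ... element data
};

struct GrowableList {
    std::vector<Node> nodes;        // contiguous storage
    size_t used_head = Node::NONE;
    size_t free_head = Node::NONE;  // the "unused_list"

    // resize() - grow: enlarge the vector, then push each brand-new element
    // onto the front of the free list so it can be handed out later.
    void grow(size_t new_size) {
        size_t old_size = nodes.size();
        if (new_size <= old_size)
            return;  // shrinking needs the defrag pass described above
        nodes.resize(new_size);
        for (size_t i = old_size; i < new_size; ++i) {
            nodes[i].prev = Node::NONE;
            nodes[i].next = free_head;
            if (free_head != Node::NONE)
                nodes[free_head].prev = i;
            free_head = i;
        }
    }
};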

Offline z64555

I was thinking it may be possible to upgrade bmpman to use this sort of structure.

Code:
#include <cstddef>  // for size_t

struct bmp_slot {
    bmp_slot* prev;  // For lists, link to the previous element
    bmp_slot* next;  // For lists, link to the next element
    size_t size;     // Nonzero size of the memory block if this slot is the block head; otherwise zero
    // ... And all the rest of the data
};

Animated textures require a contiguous block of bmpman slots, so instead of managing individual slots, the bmpman code will be working with blocks of them.

This would work like so:
The used_list and free_list (and a create_list, if necessary) would address slots by block rather than by individual frame.

Initially, bmpman would start with one large free block. As blocks are handed out, that initial free block shrinks. As blocks are freed, they are linked back onto the free_list and merged with other free blocks if they are adjacent.

Block merges may only occur on free memory.

Blocks can check for adjacency by comparing the distance between the two block heads' locations (pointer arithmetic, or ideally std::distance) against the size stored in the head of the block that sits lower in memory.
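
For example, the check might look like the following. This assumes the bmp_slot struct from the code block above; adjacent() is a hypothetical helper of mine, not an existing bmpman function, and lo is the block head at the lower address.

Code:
#include <iterator>

// Two blocks touch when the lower block ends exactly where the higher block
// begins, i.e. the distance between the heads equals the lower block's size.
bool adjacent(const bmp_slot* lo, const bmp_slot* hi) {
    return static_cast<size_t>(std::distance(lo, hi)) == lo->size;
}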

Merging blocks: the block lower in memory becomes the head, with its "next" pointer set to the higher block's "next" value. The sizes are added together, and lastly the higher block's head values are nullified.
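
Continuing that sketch, a merge along those lines could look like this. merge_blocks() is again a hypothetical helper, and both arguments are assumed to be heads of free blocks in the same slot array, with lo at the lower address.

Code:
// Fold the higher free block into the lower one, as described above.
void merge_blocks(bmp_slot* lo, bmp_slot* hi) {
    if (!adjacent(lo, hi))
        return;               // only adjacent free blocks may merge
    lo->next = hi->next;      // lower head takes over the higher block's link
    if (hi->next != nullptr)
        hi->next->prev = lo;  // keep the free_list's back-links consistent
    lo->size += hi->size;     // the sizes are added together
    hi->prev = nullptr;       // nullify the old higher head
    hi->next = nullptr;
    hi->size = 0;
}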

Splitting blocks: bmpman advances to where the split will occur and makes a new head there. The new head references the old head as its previous and the old head's next as its next, and its size is the old head's size minus the distance from the old head to the new head. The old head's next is then set to the new head's location, and its size is reduced accordingly. The old head can now be used.
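
And a sketch of the split, with the same caveats: split_block() is hypothetical, old_head is the head of a free block in the slot array, and count is how many slots the caller wants off the front of it.

Code:
// Carve `count` slots off the front of the block at old_head. The new head
// keeps the remainder of the block; old_head shrinks to the requested size
// and can then be handed out.
bmp_slot* split_block(bmp_slot* old_head, size_t count) {
    if (count == 0 || count >= old_head->size)
        return nullptr;                     // nothing sensible to split off

    bmp_slot* new_head = old_head + count;  // advance to where the split occurs
    new_head->prev = old_head;              // new head references the old head as previous
    new_head->next = old_head->next;        // ... and the old head's next as its next
    new_head->size = old_head->size - count;

    if (old_head->next != nullptr)
        old_head->next->prev = new_head;
    old_head->next = new_head;              // old head now points at the new head
    old_head->size = count;                 // reduced accordingly
    return old_head;                        // the old head can now be used
}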