This fixes an x264 dependency from the previous release.
For more details and previous releases, please see the main post.
I have been working on a new website at https://www.xmentimeline.com
About a year ago I embarked on a journey to read all X-Men comics. I quickly found it was hard to figure out which order to read them in, especially for issues from more recent years with all the crossovers, so I created this website to keep track of it while also upskilling my web development.
The visual layout of the page is chronological, but if you expand a comic issue, the expanded panel has arrows that navigate within the reading order, which follows the internal chronology. If you have a Marvel Unlimited account, you can also click through to read the comic from that panel using the MU icon in its upper right.
There is also the ability to browse by collection, which can be toggled via the menu on the lower right. This mode displays the collections in reading order but keeps each collection together, whereas the default behavior splits collections up when it makes sense for the reading order.
To keep this massive page performant, it loads only the images that are visible on screen, caches heavily, makes use of several CDNs, shows an instant loading state on subsequent visits, and more; I think that has been a success. There are roughly 2,500 comics and counting, and all of their data is always present on the page, so it has been an interesting optimization challenge!
The tool is open source; feel free to check it out at https://github.com/SubJunk/TimelineTools
Suggestions and contributions are welcome.
It is built using a few dependencies like AngularJS and MaterializeCSS.
In the Universal Media Server project, we recently ran some benchmarks to discover the fastest way to read files, particularly big files like HD movies. We tested four methods using an automatic benchmark script: FileChannel using File input, FileChannel using Path input, DataInputStream using File input, and RandomAccessFile using File input.
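To make the comparison concrete, here is roughly what reading a chunk with each variant looks like. This is a sketch rather than the actual benchmark code; the helper names and the 64 KiB chunk size are illustrative assumptions:

```java
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;

public class ReadVariants {
    static final int CHUNK_SIZE = 64 * 1024; // illustrative chunk size

    // 1. FileChannel obtained from a File via FileInputStream
    static ByteBuffer channelFromFile(File file) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(CHUNK_SIZE);
        try (FileChannel channel = new FileInputStream(file).getChannel()) {
            channel.read(buffer); // a single read, for brevity
        }
        return buffer;
    }

    // 2. FileChannel opened directly from a Path (the newer NIO.2 syntax)
    static ByteBuffer channelFromPath(Path path) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(CHUNK_SIZE);
        try (FileChannel channel = FileChannel.open(path)) {
            channel.read(buffer);
        }
        return buffer;
    }

    // 3. DataInputStream wrapping a FileInputStream
    static byte[] dataInputStream(File file) throws IOException {
        byte[] chunk = new byte[CHUNK_SIZE];
        try (DataInputStream in = new DataInputStream(new FileInputStream(file))) {
            in.readFully(chunk); // throws EOFException if the file is smaller than the chunk
        }
        return chunk;
    }

    // 4. RandomAccessFile in read-only mode
    static byte[] randomAccessFile(File file) throws IOException {
        byte[] chunk = new byte[CHUNK_SIZE];
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            raf.readFully(chunk);
        }
        return chunk;
    }
}
```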
We tested these on different hard drives with different rotation speeds, with files from 600MB up to 22GB each, and with 1-100 threads to see what effect concurrency had on the results.
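The harness was roughly shaped like the sketch below: hash every file on a fixed-size thread pool and time the whole batch. This is a hypothetical reconstruction, not the actual script; hashFile stands in for whichever read variant is under test:

```java
import java.io.IOException;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class HashBenchmark {
    // Placeholder for whichever read variant is being benchmarked.
    static long hashFile(Path file) throws IOException {
        return 0L;
    }

    static void benchmark(List<Path> files, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (Path file : files) {
            pool.submit(() -> hashFile(file)); // submitted as a Callable; exceptions end up in the Future
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        long elapsedNs = System.nanoTime() - start;
        System.out.printf("Hashing %d files using %d thread(s) took %d ms (%d ns average per file)%n",
                files.size(), threads, elapsedNs / 1_000_000, elapsedNs / files.size());
    }
}
```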
We experienced varying results, but on average for our use case we found that the two FileChannel methods were the best, and we went with the second option since the Path input is the newer syntax in Java. RandomAccessFile had some significantly slow outliers that had been causing problems on some hard drives.
FileChannel using File input:
Benchmarking of hashing 152000 files using 1 thread took 57277 ms (376824 ns average per file)
Benchmarking of hashing 152000 files using 100 threads took 20130 ms (132437 ns average per file)
FileChannel using Path input:
Benchmarking of hashing 152000 files using 1 thread took 56675 ms (372867 ns average per file)
Benchmarking of hashing 152000 files using 100 threads took 21373 ms (140615 ns average per file)
DataInputStream using File input:
Benchmarking of hashing 152000 files using 1 thread took 75716 ms (498133 ns average per file)
Benchmarking of hashing 152000 files using 100 threads took 330825 ms (2176486 ns average per file)
RandomAccessFile using File input:
Benchmarking of hashing 152000 files using 1 thread took 51090 ms (336121 ns average per file)
Benchmarking of hashing 152000 files using 100 threads took 326446 ms (2147671 ns average per file)
For other results and more details, check out the branch with the benchmarking code.
Also note that we were benchmarking a specific type of hashing used by OpenSubtitles, which involves reading only the beginning and end of each file, so other read patterns may give different results.
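For reference, the OpenSubtitles hash is defined as the file size plus a 64-bit sum of the first and last 64 KiB of the file, read as little-endian words, with overflow ignored. Here is a minimal sketch of that using the FileChannel-with-Path approach we settled on; it is a simplified illustration, not the exact UMS implementation:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.file.Path;

public class OpenSubtitlesHash {
    private static final int CHUNK_SIZE = 64 * 1024; // head and tail chunk size

    public static long computeHash(Path path) throws IOException {
        try (FileChannel channel = FileChannel.open(path)) {
            long size = channel.size();
            long hash = size;
            hash += sumChunk(channel, 0);                              // first 64 KiB
            hash += sumChunk(channel, Math.max(size - CHUNK_SIZE, 0)); // last 64 KiB
            return hash; // the hash is defined modulo 2^64, so overflow is fine
        }
    }

    // Sums the chunk at the given offset as little-endian 64-bit words.
    private static long sumChunk(FileChannel channel, long offset) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(CHUNK_SIZE).order(ByteOrder.LITTLE_ENDIAN);
        channel.position(offset);
        while (buffer.hasRemaining() && channel.read(buffer) != -1) {
            // keep reading until the chunk is full or we hit end of file
        }
        buffer.flip();
        long sum = 0;
        while (buffer.remaining() >= 8) {
            sum += buffer.getLong();
        }
        return sum;
    }
}
```

Because only two 64 KiB chunks are read per file regardless of its size, this workload stresses open and seek behavior more than sequential throughput, which is part of why the results above may not carry over to code that reads whole files.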