Commits


portable: fix up sha1.h include; -portable doesn't need this directly.


tog: override SIGTERM and SIGINT handlers to avoid ncurses cleanup() handler. ok thomas
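
A minimal sketch of the idea, using illustrative handler and flag names rather than tog's actual code: installing our own SIGINT/SIGTERM handlers (after curses has been initialized) keeps the curses library from tearing the screen down and exiting from inside its own signal handler, so the program can leave curses mode cleanly itself.

    #include <signal.h>
    #include <string.h>

    static volatile sig_atomic_t sigint_received;
    static volatile sig_atomic_t sigterm_received;

    static void
    catch_sigint(int signo)
    {
        sigint_received = 1;
    }

    static void
    catch_sigterm(int signo)
    {
        sigterm_received = 1;
    }

    /* Call after initscr() so these replace the handlers curses installed. */
    static void
    install_signal_handlers(void)
    {
        struct sigaction sact;

        memset(&sact, 0, sizeof(sact));
        sigemptyset(&sact.sa_mask);
        sact.sa_flags = 0;

        sact.sa_handler = catch_sigint;
        sigaction(SIGINT, &sact, NULL);
        sact.sa_handler = catch_sigterm;
        sigaction(SIGTERM, &sact, NULL);

        /* The main loop checks sigint_received/sigterm_received and
         * shuts curses down with endwin() before exiting. */
    }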


reduce GOT_PACK_CACHE_SIZE to 32; otherwise it uses too many open files. Found by tracey


ensure that all open basefd/accumfd get closed in got_repo_close(). Found by tracey


open temporary files needed for delta application in got_repo_open() This prepares for callers of got_repo_open() that cannot afford to open files in /tmp, such as gotwebd. In a follow-up change, we could ask such callers to pass in the required amount of open temporary files. One consequence is that got_repo_open() now requires the "cpath" pledge promise. Add the "cpath" promise to affected callers and remove it once the repository has been opened. ok tracey
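
A minimal sketch of the pledge(2) pattern described above, assuming a hypothetical open_repository() helper in place of got_repo_open(): "cpath" stays in the promise set only while the repository (and its temporary files) is being opened, and is dropped again afterwards. The promise lists themselves are illustrative, not copied from any particular got program.

    #include <err.h>
    #include <unistd.h>

    struct repository;                                          /* hypothetical */
    int open_repository(struct repository **, const char *);    /* hypothetical */

    static struct repository *
    open_with_pledge(const char *repo_path)
    {
        struct repository *repo;

        if (pledge("stdio rpath wpath cpath flock unveil", NULL) == -1)
            err(1, "pledge");

        if (open_repository(&repo, repo_path) == -1)
            err(1, "open_repository");

        /* Temporary files now exist; "cpath" is no longer needed. */
        if (pledge("stdio rpath wpath flock unveil", NULL) == -1)
            err(1, "pledge");

        return repo;
    }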


avoid get_delta_chain_max_size() in dump_delta_chain_to_mem()


avoid get_delta_chain_max_size() in dump_delta_chain_to_file()


fix a bug in findtwixt() which caused pack files to be missing parent commits. The 'nskip' variable is supposed to reflect commits which are waiting on the queue and have the 'skip' color. Only increment 'nskip' when adding such commits to the queue. Problem observed with got send -T and a tag pointing to a deleted branch. Test to reproduce the bug written by op@.


use random seeds for murmurhash2. Change the three hardcoded seeds to fresh ones generated on demand via arc4random. Suggested/fixed by and ok stsp@
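
A sketch of the pattern, with an assumed murmurhash2() prototype (the real one lives in the project's murmurhash2 header): the seed is created lazily with arc4random(3) the first time a hash is needed instead of being baked into the source.

    #include <stdint.h>
    #include <stdlib.h>     /* arc4random() */
    #include <string.h>

    uint32_t murmurhash2(const void *, size_t, uint32_t);  /* assumed prototype */

    static uint32_t
    hash_path(const char *path)
    {
        static uint32_t seed;

        /* Generate the seed on first use; a zero result simply gets
         * regenerated on the next call, which is harmless in a sketch. */
        if (seed == 0)
            seed = arc4random();
        return murmurhash2(path, strlen(path), seed);
    }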


include header


shrink struct got_pack_meta a bit by removing the have_reused_delta flag This flag can be expressed as m->reused_delta_offset != 0 because all deltas in valid pack files will be written at a non-zero offset. We allocate a huge number of these structs during packing, so every little bit helps.
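
A sketch of the change with simplified names: because a reused delta is always recorded at a non-zero pack file offset, the offset field can double as the flag.

    #include <sys/types.h>

    struct pack_meta {
        /* ... other members elided ... */
        off_t   reused_delta_offset;    /* 0 means "no reused delta" */
        /* int  have_reused_delta;         removed: redundant with the above */
    };

    #define HAVE_REUSED_DELTA(m)    ((m)->reused_delta_offset != 0)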


reduce the amount of memory used for caching deltas during deltification With files sorted properly for deltification we produce better deltas but end up consuming more memory and risk running into OpenBSD ulimits during packing. To compensate, reduce the threshold for the amount of delta data we store in memory, spooling more deltas into the cache file. ok op@


store a path hash instead of a verbatim path in pack meta data This reduces memory use by gotadmin pack. The goal is to sort files which share a path next to each other for deltification. A hash of the path is good enough for this purpose and consumes less memory than a verbatim copy of the path. Git does something similar. ok op@
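
A sketch of the idea with simplified names and an assumed murmurhash2() prototype: hash the repository-relative path once, store only the 32-bit hash in the per-object meta data, and sort on it so blobs sharing a path end up adjacent before deltification.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    uint32_t murmurhash2(const void *, size_t, uint32_t);  /* assumed prototype */

    struct pack_meta {
        uint32_t path_hash;     /* replaces a malloc'd copy of the path */
        /* ... other members elided ... */
    };

    static void
    set_path_hash(struct pack_meta *m, const char *path, uint32_t seed)
    {
        m->path_hash = murmurhash2(path, strlen(path), seed);
    }

    static int
    delta_order_cmp(const void *pa, const void *pb)
    {
        const struct pack_meta *a = pa, *b = pb;

        if (a->path_hash < b->path_hash)
            return -1;
        if (a->path_hash > b->path_hash)
            return 1;
        return 0;
    }

    /* qsort(meta, nmeta, sizeof(meta[0]), delta_order_cmp); */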


fix paths stored in pack meta data, improving file deltification. The old code was broken and stored either an empty path or just a filename instead of a repository-relative path, which means we didn't sort files for deltification as intended. Fixing this provides much better deltas in large pack files written by gotadmin pack -a. In my test case, pack size changed from 2GB to 1.5GB. ok op@


refactor got_patch / got_worktree_patch_complete: let got_patch own fileindex_path and call got_worktree_patch_complete only if got_worktree_patch_prepare hasn't failed. Suggested by stsp@


got patch: avoid opening/syncing/closing the fileindex over and over again. Instead of flushing the fileindex after every patch in the patchfile, just reuse the same fileindex and sync it only at the end of the patch operation. This speeds up 'got patch' on large repositories by quite a lot.
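
A sketch of the before/after pattern, using hypothetical helper names rather than the real worktree API: the file index is opened once, every patch in the file is applied against the same in-memory copy, and it is synced back to disk a single time at the end.

    #include <stddef.h>

    struct fileindex;
    struct patch;

    /* hypothetical helpers standing in for the worktree/patch API */
    int  fileindex_open(struct fileindex **, const char *);
    int  fileindex_sync(struct fileindex *, const char *);
    void fileindex_close(struct fileindex *);
    int  apply_one_patch(struct fileindex *, struct patch *);

    static int
    apply_patchfile(struct patch **patches, size_t npatches, const char *path)
    {
        struct fileindex *fileindex;
        size_t i;
        int ret = 0;

        /* Open the file index once for the whole operation... */
        if (fileindex_open(&fileindex, path) == -1)
            return -1;

        /* ...apply every patch against the same in-memory copy... */
        for (i = 0; i < npatches; i++) {
            ret = apply_one_patch(fileindex, patches[i]);
            if (ret == -1)
                break;
        }

        /* ...and sync it back to disk only once, at the end. */
        if (ret == 0)
            ret = fileindex_sync(fileindex, path);
        fileindex_close(fileindex);
        return ret;
    }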


inline struct got_object_id in struct got_object_qid Saves us from doing a malloc/free call for every item on the list. ok op@
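
A sketch of the layout change with simplified names: embedding the id in the queue element removes one allocation (and one free) per queued object.

    #include <sys/queue.h>

    #define DIGEST_LENGTH 20    /* SHA-1, for illustration */

    struct object_id {
        unsigned char sha1[DIGEST_LENGTH];
    };

    struct object_qid {
        STAILQ_ENTRY(object_qid) entry;
        struct object_id id;    /* was: struct object_id *id, malloc'd per item */
        void *data;
    };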


plug a small memleak on error in got_pack_create()


avoid malloc/free for the duplicate check in got_pathlist_insert(). ok op@
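
A sketch of the approach with simplified types: walk the list to find the insertion point first and bail out on a duplicate, so the duplicate case no longer costs a malloc/free pair.

    #include <stdlib.h>
    #include <string.h>
    #include <sys/queue.h>

    struct pathlist_entry {
        TAILQ_ENTRY(pathlist_entry) entry;
        const char *path;
        void *data;
    };
    TAILQ_HEAD(pathlist_head, pathlist_entry);

    static int
    pathlist_insert(struct pathlist_head *head, const char *path, void *data)
    {
        struct pathlist_entry *pe, *new;
        int cmp;

        /* Find the insertion point; return early on a duplicate. */
        TAILQ_FOREACH(pe, head, entry) {
            cmp = strcmp(pe->path, path);
            if (cmp == 0)
                return 0;   /* already present; nothing was allocated */
            if (cmp > 0)
                break;
        }

        new = malloc(sizeof(*new));
        if (new == NULL)
            return -1;
        new->path = path;
        new->data = data;
        if (pe == NULL)
            TAILQ_INSERT_TAIL(head, new, entry);
        else
            TAILQ_INSERT_BEFORE(pe, new, entry);
        return 0;
    }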


revert "Skip poll(2) if an imsgbuf has a non-empty read buffer" imsg_read() will call recvmsg() on the file descriptor regardless of the read buffer's state, so we should ensure that data is ready. The read buffer is used by imsg_get(), not imsg_read(). We already call imsg_get() before imsg_read(), and call the latter only if imsg_get() returns zero.


Skip poll(2) if an imsgbuf has a non-empty read buffer.


imsg_add() frees its msg argument on error; avoid double-free in error paths
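
A sketch of the rule for callers, using the standard imsg(3) API with an arbitrary message type: once imsg_add() fails, the ibuf is already gone and must not be freed or closed again.

    #include <sys/types.h>
    #include <sys/queue.h>
    #include <sys/uio.h>
    #include <stdint.h>
    #include <imsg.h>

    static int
    send_payload(struct imsgbuf *ibuf, uint32_t type, const void *data,
        uint16_t len)
    {
        struct ibuf *wbuf;

        wbuf = imsg_create(ibuf, type, 0, 0, len);
        if (wbuf == NULL)
            return -1;

        if (imsg_add(wbuf, data, len) == -1)
            return -1;  /* wbuf was already freed by imsg_add() */

        imsg_close(ibuf, wbuf);
        return 0;
    }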


batch up tree entries in imsg instead of sending one imsg per tree entry This speeds up loading of trees significantly. ok op@
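
A sketch of the batching idea; the record layout and message type are illustrative, not got's actual wire format. As many fixed-size entry records as fit below the imsg payload limit are packed into each message.

    #include <sys/types.h>
    #include <sys/queue.h>
    #include <sys/uio.h>
    #include <stdint.h>
    #include <imsg.h>

    #define MSG_TREE_ENTRIES 42             /* hypothetical message type */

    struct tree_entry_rec {                 /* illustrative wire record */
        uint8_t  id[20];
        uint32_t mode;
        char     name[256];
    };

    static int
    send_tree_entries(struct imsgbuf *ibuf, struct tree_entry_rec *recs,
        size_t nrecs)
    {
        size_t i, batch, max_per_imsg;

        max_per_imsg = (MAX_IMSGSIZE - IMSG_HEADER_SIZE) /
            sizeof(struct tree_entry_rec);

        for (i = 0; i < nrecs; i += batch) {
            batch = nrecs - i;
            if (batch > max_per_imsg)
                batch = max_per_imsg;
            if (imsg_compose(ibuf, MSG_TREE_ENTRIES, 0, 0, -1,
                &recs[i], batch * sizeof(recs[0])) == -1)
                return -1;
        }
        return 0;
    }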


parse tree entries into an array instead of a pathlist Avoids some extra malloc/free in a performance-critical path. ok op@


prevent an out-of-bounds access in got_privsep_recv_tree()
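
A sketch of the kind of bounds check involved, with illustrative struct layouts: an entry count received from another process is validated against the actual imsg payload size before any entries are parsed.

    #include <sys/types.h>
    #include <sys/queue.h>
    #include <sys/uio.h>
    #include <stdint.h>
    #include <string.h>
    #include <imsg.h>

    struct tree_imsg_hdr {                  /* illustrative */
        uint32_t nentries;
    };

    struct tree_entry_rec {                 /* illustrative */
        uint8_t  id[20];
        uint32_t mode;
        char     name[256];
    };

    static int
    recv_tree(struct imsg *imsg)
    {
        struct tree_imsg_hdr hdr;
        size_t datalen = imsg->hdr.len - IMSG_HEADER_SIZE;

        if (datalen < sizeof(hdr))
            return -1;
        memcpy(&hdr, imsg->data, sizeof(hdr));

        /* Reject counts which cannot possibly fit in the payload. */
        if (hdr.nentries >
            (datalen - sizeof(hdr)) / sizeof(struct tree_entry_rec))
            return -1;

        /* ... parse hdr.nentries records following the header ... */
        return 0;
    }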