I'd like to personally thank two community members for this release. First, Finn Eggers (Koivisto author), for helping get Berserk set up on the Koivisto Trainer.

Self-play results are vs Berserk 8.5.1; self-play results do not reflect exactly how it performs against other engines.

Changes in this release:

- Fix hashmove reference in code (#414)
- Adjust LMR to use a new formula + legal moves
- Improve NNUE auto-vectorization and update logic (#408)
- Don't reduce more for previous PV moves (#407)
- Reduce more in NMP when there are no opponent threats (#405)
- Accumulator update via Move (#404)
- Separate promotions in tactical history on captures (#403)
- Return to 512 Hidden Neurons (#402)
- Remove Counter History Pruning (#396)
- Reverse Futility Pruning Margin Tweak (#393)
- Correct singular search results (#392)
- Disable late move pruning at root + improve ordering (#390)
- Split Good and Bad captures with history and SEE (#467)
- Utilize LMR Depth when move pruning quiets (#465)
- Add WDL Output and Normalize CP to 50% at 100cp (#463)
- Identify upcoming repetitions (#462)
- Add PV TT Eval correction in QSearch (#461)
- No TT eval correction in QSearch (#459)
- Increase hidden layer to 768 neurons (#458)
- Additional continuation history for follow-follow-up (#454)
- Remove SF Chess960 Cornered Bishop Logic (#450)
- History values fit within int16_t (#449)
- Add simple protections for search explosions due to extensions (#448)
- Move Type in Continuation History (#447)
- Track bestmove using "Root Moves" (#446)
- Train with a mix of Berserk and Koivisto data (#445)
- Utilize 128bit multiplication for TT indexing (#440)
- QSearch ordering strictly based on Capture History (#439)
- Prevent positive score differences from impacting TM (#435)
- Update GitHub Workflow to force Ubuntu 20.04 (#425)