Search tuning at very long time control
This patch is the result of a tuning session of approximately 100k games at 120+1.2.
The biggest changes are in extensions, the stat bonus, and the depth reduction for nodes without a tt move (a small illustrative sketch of the stat bonus change follows the test results below).

Failed STC:
https://tests.stockfishchess.org/tests/view/63f72c72e74a12625bcd7938
LLR: -2.94 (-2.94,2.94) <0.00,2.00>
Total: 13872 W: 3535 L: 3769 D: 6568
Ptnml(0-2): 56, 1621, 3800, 1419, 40

Close to neutral at LTC:
https://tests.stockfishchess.org/tests/view/63f738f5e74a12625bcd7b8a
Elo: 0.80 +-1.2 (95%) LOS: 90.0%
Total: 60000 W: 16213 L: 16074 D: 27713
Ptnml(0-2): 24, 5718, 18379, 5853, 26
nElo: 1.82 +-2.8 (95%) PairsRatio: 1.02

Passed 180+1.8 VLTC:
https://tests.stockfishchess.org/tests/view/63f868f3e74a12625bcdb33e
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 15864 W: 4449 L: 4202 D: 7213
Ptnml(0-2): 1, 1301, 5083, 1544, 3

Passed 60+0.6 8 threads SMP VLTC:
https://tests.stockfishchess.org/tests/view/63f8a5d6e74a12625bcdbdb3
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 6288 W: 1821 L: 1604 D: 2863
Ptnml(0-2): 0, 402, 2123, 619, 0

closes #4406

bench 4705194
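
A minimal standalone sketch (added for illustration; not part of the patch) comparing the old and new stat bonus formulas taken from the diff below. The helper names are invented for this example.

```cpp
#include <algorithm>
#include <cstdio>

// Old and new stat bonus formulas, as they appear in src/search.cpp in this commit's diff.
int stat_bonus_old(int d) { return std::min(350 * d - 400, 1650); }
int stat_bonus_new(int d) { return std::min(340 * d - 470, 1855); }

int main() {
    // The retuned bonus is slightly smaller at low depths but caps later and higher.
    for (int d = 1; d <= 8; ++d)
        std::printf("depth %d: old %5d  new %5d\n", d, stat_bonus_old(d), stat_bonus_new(d));
    return 0;
}
```
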
Vizvezdenec authored and vondele committed Feb 24, 2023
1 parent 29b5ad5 commit 472e726
Showing 1 changed file with 51 additions and 51 deletions.
102 changes: 51 additions & 51 deletions src/search.cpp
@@ -63,15 +63,15 @@

// Futility margin
Value futility_margin(Depth d, bool improving) {
return Value(158 * (d - improving));
return Value(154 * (d - improving));
}

// Reductions lookup table, initialized at startup
int Reductions[MAX_MOVES]; // [depth or moveNumber]

Depth reduction(bool i, Depth d, int mn, Value delta, Value rootDelta) {
int r = Reductions[d] * Reductions[mn];
return (r + 1460 - int(delta) * 1024 / int(rootDelta)) / 1024 + (!i && r > 937);
return (r + 1449 - int(delta) * 1032 / int(rootDelta)) / 1024 + (!i && r > 941);
}

constexpr int futility_move_count(bool improving, Depth depth) {
@@ -81,7 +81,7 @@

// History and stats update bonus, based on depth
int stat_bonus(Depth d) {
return std::min(350 * d - 400, 1650);
return std::min(340 * d - 470, 1855);
}

// Add a small random component to draw evaluations to avoid 3-fold blindness
@@ -161,7 +161,7 @@
void Search::init() {

for (int i = 1; i < MAX_MOVES; ++i)
Reductions[i] = int((20.26 + std::log(Threads.size()) / 2) * std::log(i));
Reductions[i] = int((19.47 + std::log(Threads.size()) / 2) * std::log(i));
}


@@ -354,12 +354,12 @@
if (rootDepth >= 4)
{
Value prev = rootMoves[pvIdx].averageScore;
delta = Value(10) + int(prev) * prev / 15400;
delta = Value(10) + int(prev) * prev / 16502;
alpha = std::max(prev - delta,-VALUE_INFINITE);
beta = std::min(prev + delta, VALUE_INFINITE);

// Adjust optimism based on root move's previousScore
int opt = 116 * prev / (std::abs(prev) + 170);
int opt = 120 * prev / (std::abs(prev) + 161);
optimism[ us] = Value(opt);
optimism[~us] = -optimism[us];
}
@@ -462,16 +462,16 @@
&& !Threads.stop
&& !mainThread->stopOnPonderhit)
{
double fallingEval = (71 + 12 * (mainThread->bestPreviousAverageScore - bestValue)
+ 6 * (mainThread->iterValue[iterIdx] - bestValue)) / 656.7;
double fallingEval = (69 + 13 * (mainThread->bestPreviousAverageScore - bestValue)
+ 6 * (mainThread->iterValue[iterIdx] - bestValue)) / 619.6;
fallingEval = std::clamp(fallingEval, 0.5, 1.5);

// If the bestMove is stable over several iterations, reduce time accordingly
timeReduction = lastBestMoveDepth + 9 < completedDepth ? 1.37 : 0.65;
double reduction = (1.4 + mainThread->previousTimeReduction) / (2.15 * timeReduction);
double bestMoveInstability = 1 + 1.7 * totBestMoveChanges / Threads.size();
timeReduction = lastBestMoveDepth + 8 < completedDepth ? 1.57 : 0.65;
double reduction = (1.4 + mainThread->previousTimeReduction) / (2.08 * timeReduction);
double bestMoveInstability = 1 + 1.8 * totBestMoveChanges / Threads.size();
int complexity = mainThread->complexityAverage.value();
double complexPosition = std::min(1.0 + (complexity - 261) / 1738.7, 1.5);
double complexPosition = std::min(1.03 + (complexity - 241) / 1552.0, 1.45);

double totalTime = Time.optimum() * fallingEval * reduction * bestMoveInstability * complexPosition;

@@ -491,7 +491,7 @@
Threads.stop = true;
}
else if ( !mainThread->ponder
&& Time.elapsed() > totalTime * 0.53)
&& Time.elapsed() > totalTime * 0.50)
Threads.increaseDepth = false;
else
Threads.increaseDepth = true;
@@ -760,7 +760,7 @@
// Use static evaluation difference to improve quiet move ordering (~4 Elo)
if (is_ok((ss-1)->currentMove) && !(ss-1)->inCheck && !priorCapture)
{
int bonus = std::clamp(-19 * int((ss-1)->staticEval + ss->staticEval), -1940, 1940);
int bonus = std::clamp(-19 * int((ss-1)->staticEval + ss->staticEval), -1920, 1920);
thisThread->mainHistory[~us][from_to((ss-1)->currentMove)] << bonus;
}

@@ -770,13 +770,13 @@
// margin and the improving flag are used in various pruning heuristics.
improvement = (ss-2)->staticEval != VALUE_NONE ? ss->staticEval - (ss-2)->staticEval
: (ss-4)->staticEval != VALUE_NONE ? ss->staticEval - (ss-4)->staticEval
: 172;
: 156;
improving = improvement > 0;

// Step 7. Razoring (~1 Elo).
// If eval is really low check with qsearch if it can exceed alpha, if it can't,
// return a fail low.
if (eval < alpha - 394 - 255 * depth * depth)
if (eval < alpha - 426 - 252 * depth * depth)
{
value = qsearch<NonPV>(pos, ss, alpha - 1, alpha);
if (value < alpha)
@@ -786,27 +786,27 @@
// Step 8. Futility pruning: child node (~40 Elo).
// The depth condition is important for mate finding.
if ( !ss->ttPv
&& depth < 8
&& eval - futility_margin(depth, improving) - (ss-1)->statScore / 304 >= beta
&& depth < 9
&& eval - futility_margin(depth, improving) - (ss-1)->statScore / 280 >= beta
&& eval >= beta
&& eval < 28580) // larger than VALUE_KNOWN_WIN, but smaller than TB wins
&& eval < 25128) // larger than VALUE_KNOWN_WIN, but smaller than TB wins
return eval;

// Step 9. Null move search with verification search (~35 Elo)
if ( !PvNode
&& (ss-1)->currentMove != MOVE_NULL
&& (ss-1)->statScore < 18200
&& (ss-1)->statScore < 18755
&& eval >= beta
&& eval >= ss->staticEval
&& ss->staticEval >= beta - 20 * depth - improvement / 14 + 235 + complexity / 24
&& ss->staticEval >= beta - 19 * depth - improvement / 13 + 253 + complexity / 25
&& !excludedMove
&& pos.non_pawn_material(us)
&& (ss->ply >= thisThread->nmpMinPly || us != thisThread->nmpColor))
{
assert(eval - beta >= 0);

// Null move dynamic reduction based on depth, eval and complexity of position
Depth R = std::min(int(eval - beta) / 165, 6) + depth / 3 + 4 - (complexity > 800);
Depth R = std::min(int(eval - beta) / 168, 6) + depth / 3 + 4 - (complexity > 825);

ss->currentMove = MOVE_NULL;
ss->continuationHistory = &thisThread->continuationHistory[0][0][NO_PIECE][0];
@@ -842,7 +842,7 @@
}
}

probCutBeta = beta + 180 - 54 * improving;
probCutBeta = beta + 186 - 54 * improving;

// Step 10. ProbCut (~10 Elo)
// If we have a good enough capture and a reduced search returns a value
@@ -904,14 +904,14 @@
return qsearch<PV>(pos, ss, alpha, beta);

if ( cutNode
&& depth >= 9
&& depth >= 7
&& !ttMove)
depth -= 2;

moves_loop: // When in check, search starts here

// Step 12. A small Probcut idea, when we are in check (~4 Elo)
probCutBeta = beta + 402;
probCutBeta = beta + 391;
if ( ss->inCheck
&& !PvNode
&& depth >= 2
@@ -1006,14 +1006,14 @@
// Futility pruning for captures (~2 Elo)
if ( !givesCheck
&& !PvNode
&& lmrDepth < 7
&& lmrDepth < 6
&& !ss->inCheck
&& ss->staticEval + 185 + 203 * lmrDepth + PieceValue[EG][pos.piece_on(to_sq(move))]
+ captureHistory[movedPiece][to_sq(move)][type_of(pos.piece_on(to_sq(move)))] / 6 < alpha)
&& ss->staticEval + 182 + 230 * lmrDepth + PieceValue[EG][pos.piece_on(to_sq(move))]
+ captureHistory[movedPiece][to_sq(move)][type_of(pos.piece_on(to_sq(move)))] / 7 < alpha)
continue;

// SEE based pruning (~11 Elo)
if (!pos.see_ge(move, Value(-220) * depth))
if (!pos.see_ge(move, Value(-206) * depth))
continue;
}
else
@@ -1024,24 +1024,24 @@

// Continuation history based pruning (~2 Elo)
if ( lmrDepth < 5
&& history < -4180 * (depth - 1))
&& history < -4405 * (depth - 1))
continue;

history += 2 * thisThread->mainHistory[us][from_to(move)];

lmrDepth += history / 7208;
lmrDepth += history / 7278;
lmrDepth = std::max(lmrDepth, -2);

// Futility pruning: parent node (~13 Elo)
if ( !ss->inCheck
&& lmrDepth < 13
&& ss->staticEval + 103 + 136 * lmrDepth <= alpha)
&& ss->staticEval + 103 + 138 * lmrDepth <= alpha)
continue;

lmrDepth = std::max(lmrDepth, 0);

// Prune moves with negative SEE (~4 Elo)
if (!pos.see_ge(move, Value(-25 * lmrDepth * lmrDepth - 16 * lmrDepth)))
if (!pos.see_ge(move, Value(-24 * lmrDepth * lmrDepth - 15 * lmrDepth)))
continue;
}
}
@@ -1056,15 +1056,15 @@
// a reduced search on all the other moves but the ttMove and if the
// result is lower than ttValue minus a margin, then we will extend the ttMove.
if ( !rootNode
&& depth >= 4 - (thisThread->completedDepth > 22) + 2 * (PvNode && tte->is_pv())
&& depth >= 4 - (thisThread->completedDepth > 21) + 2 * (PvNode && tte->is_pv())
&& move == ttMove
&& !excludedMove // Avoid recursive singular search
/* && ttValue != VALUE_NONE Already implicit in the next condition */
&& abs(ttValue) < VALUE_KNOWN_WIN
&& (tte->bound() & BOUND_LOWER)
&& tte->depth() >= depth - 3)
{
Value singularBeta = ttValue - (3 + (ss->ttPv && !PvNode)) * depth;
Value singularBeta = ttValue - (2 + (ss->ttPv && !PvNode)) * depth;
Depth singularDepth = (depth - 1) / 2;

ss->excludedMove = move;
@@ -1083,7 +1083,7 @@
&& ss->doubleExtensions <= 10)
{
extension = 2;
depth += depth < 12;
depth += depth < 13;
}
}

@@ -1106,15 +1106,15 @@

// Check extensions (~1 Elo)
else if ( givesCheck
&& depth > 9
&& abs(ss->staticEval) > 78)
&& depth > 10
&& abs(ss->staticEval) > 88)
extension = 1;

// Quiet ttMove extensions (~1 Elo)
else if ( PvNode
&& move == ttMove
&& move == ss->killers[0]
&& (*contHist[0])[movedPiece][to_sq(move)] >= 5600)
&& (*contHist[0])[movedPiece][to_sq(move)] >= 5705)
extension = 1;
}

@@ -1155,7 +1155,7 @@

// Decrease reduction for PvNodes based on depth
if (PvNode)
r -= 1 + 11 / (3 + depth);
r -= 1 + 12 / (3 + depth);

// Decrease reduction if ttMove has been singularly extended (~1 Elo)
if (singularQuietLMR)
@@ -1172,17 +1172,17 @@

// Decrease reduction if move is a killer and we have a good history
if (move == ss->killers[0]
&& (*contHist[0])[movedPiece][to_sq(move)] >= 3600)
&& (*contHist[0])[movedPiece][to_sq(move)] >= 3722)
r--;

ss->statScore = 2 * thisThread->mainHistory[us][from_to(move)]
+ (*contHist[0])[movedPiece][to_sq(move)]
+ (*contHist[1])[movedPiece][to_sq(move)]
+ (*contHist[3])[movedPiece][to_sq(move)]
- 4467;
- 4182;

// Decrease/increase reduction for moves with a good/bad history (~30 Elo)
r -= ss->statScore / (12800 + 4410 * (depth > 7 && depth < 19));
r -= ss->statScore / (11791 + 3992 * (depth > 6 && depth < 19));

// Step 17. Late moves reduction / extension (LMR, ~117 Elo)
// We use various heuristics for the sons of a node after the first son has
@@ -1206,8 +1206,8 @@
{
// Adjust full depth search based on LMR results - if result
// was good enough search deeper, if it was bad enough search shallower
const bool doDeeperSearch = value > (alpha + 66 + 11 * (newDepth - d));
const bool doEvenDeeperSearch = value > alpha + 582 && ss->doubleExtensions <= 5;
const bool doDeeperSearch = value > (alpha + 58 + 12 * (newDepth - d));
const bool doEvenDeeperSearch = value > alpha + 588 && ss->doubleExtensions <= 5;
const bool doShallowerSearch = value < bestValue + newDepth;

ss->doubleExtensions = ss->doubleExtensions + doEvenDeeperSearch;
@@ -1318,8 +1318,8 @@
// Reduce other moves if we have found at least one score improvement
if ( depth > 1
&& depth < 6
&& beta < VALUE_KNOWN_WIN
&& alpha > -VALUE_KNOWN_WIN)
&& beta < 10534
&& alpha > -10534)
depth -= 1;

assert(depth > 0);
@@ -1374,7 +1374,7 @@
else if (!priorCapture)
{
// Extra bonuses for PV/Cut nodes or bad fail lows
int bonus = (depth > 4) + (PvNode || cutNode) + (bestValue < alpha - 88 * depth);
int bonus = (depth > 5) + (PvNode || cutNode) + (bestValue < alpha - 97 * depth);
update_continuation_histories(ss-1, pos.piece_on(prevSq), prevSq, stat_bonus(depth) * bonus);
}

@@ -1502,7 +1502,7 @@
if (PvNode && bestValue > alpha)
alpha = bestValue;

futilityBase = bestValue + 158;
futilityBase = bestValue + 168;
}

const PieceToHistory* contHist[] = { (ss-1)->continuationHistory, (ss-2)->continuationHistory,
@@ -1575,7 +1575,7 @@
continue;

// Do not search moves with bad enough SEE values (~5 Elo)
if (!pos.see_ge(move, Value(-108)))
if (!pos.see_ge(move, Value(-110)))
continue;

}
@@ -1708,7 +1708,7 @@

if (!pos.capture(bestMove))
{
int bonus2 = bestValue > beta + 146 ? bonus1 // larger bonus
int bonus2 = bestValue > beta + 153 ? bonus1 // larger bonus
: stat_bonus(depth); // smaller bonus

// Increase stats for the best move in case it was a quiet move

1 comment on commit 472e726

@dubslow (Contributor) commented on commit 472e726, Mar 1, 2023

Since singular extensions have proven to be perhaps the most wildly and weirdly scaling part of the search, I investigated how much of this weird scaling can be blamed on the singularBeta change. The answer is "most", and it may or may not be "all".
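
For context, the singularBeta change being discussed is this pair of lines from the diff above (old value first, new value second):

```cpp
// before this commit:
Value singularBeta = ttValue - (3 + (ss->ttPv && !PvNode)) * depth;
// after this commit:
Value singularBeta = ttValue - (2 + (ss->ttPv && !PvNode)) * depth;
```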

At STC, most or all of the loss can be blamed on the sB change:
https://tests.stockfishchess.org/tests/view/63f916d0e74a12625bcdd320

At LTC, the sB change appears to be neutral:
https://tests.stockfishchess.org/tests/view/63fb45efe74a12625bce3f3a

At VLTC, the sB change appears to explain most, if perhaps not all, of the gain:
https://tests.stockfishchess.org/tests/live_elo/63fb4624e74a12625bce3f48

A couple of attempts to use completedDepth to scale the transition from 3 to 2, akin to the singular depth condition, failed to gain at LTC (a rough sketch of the idea follows the links below):
cD > 26: https://tests.stockfishchess.org/tests/view/63fd7b7de74a12625bcea5e3
cD > 22: https://tests.stockfishchess.org/tests/view/63f933bbe74a12625bcdd8fb
cD > 18: https://tests.stockfishchess.org/tests/view/63ff3e3de74a12625bcefd18
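
Purely as a sketch of the idea (not one of the actual tested patches), such a completedDepth gate on the singularBeta multiplier might look roughly like this, with 22 standing in for whichever threshold is used:

```cpp
// Hypothetical: keep the old multiplier of 3 until enough iterations have completed,
// then switch to this commit's multiplier of 2 (thresholds of 26, 22 and 18 were tried).
Value singularBeta = ttValue - ((thisThread->completedDepth > 22 ? 2 : 3)
                                + (ss->ttPv && !PvNode)) * depth;
```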

I have no idea how much of the crazy SMP-VLTC scaling is due to sB. It's probably way too expensive to be worth investigating.
