Novelty search has shown benefits in different fields such as evolutionary computing, classical AI planning, and deep reinforcement learning. Searching for novelty instead of, or in addition to, directly maximizing the search objective aims at avoiding dead ends and local minima, and at improving exploration overall. We propose and test the integration of novelty into Monte Carlo Tree Search (MCTS), a popular framework for online RL planning, by linearly combining value estimates with novelty scores during the selection phase of MCTS. We adapt four novelty measures from the literature (evaluation novelty, state-pseudocounts, feature-pseudocounts, and frequency-thresholding), integrate them into MCTS, and test them in six board games (Connect4, Othello, Breakthrough, Knightthrough, AtariGo, and Gomoku). Experiments show improvements for MCTS in a wide range of settings, covering guidance both by hand-coded heuristics and by neural networks. The results demonstrate the potential of these optimistic novelty estimates to achieve online generalisation of uncertainty during search.
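The selection rule described above, a linear combination of value estimates and novelty scores, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the novelty weight `beta`, the pseudocount-based bonus `1/sqrt(N+1)`, and the UCT constant `c` are all illustrative assumptions standing in for one of the four adapted measures.

```python
import math

def uct_novelty_score(q, n, n_parent, pseudocount, c=1.4, beta=0.5):
    """UCT value linearly combined with a novelty bonus.

    q: child's mean value estimate (from a heuristic or a neural network)
    n: child visit count; n_parent: parent visit count
    pseudocount: hypothetical count N(s) of how often the child's state
                 (or its features) has been seen during search
    c, beta: exploration and novelty weights (illustrative values)
    """
    if n == 0:
        return float("inf")  # unvisited children are always explored first
    exploration = c * math.sqrt(math.log(n_parent) / n)
    novelty = 1.0 / math.sqrt(pseudocount + 1)  # optimistic for rarely seen states
    return q + exploration + beta * novelty

def select_child(children):
    """Pick the child maximizing the combined score.

    children: list of dicts with keys 'q', 'n', 'pseudocount'.
    """
    n_parent = sum(ch["n"] for ch in children) or 1
    return max(
        children,
        key=lambda ch: uct_novelty_score(ch["q"], ch["n"], n_parent, ch["pseudocount"]),
    )
```

With equal value estimates and visit counts, the child whose state has the lower pseudocount (i.e. is more novel) receives the higher combined score and is selected.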