We consider minimizing nonsmooth convex functions with bounded subgradients. Instead of directly observing a subgradient at every step, however, the optimizer receives an adversarially corrupted subgradient. The adversary's power is limited by a finite corruption budget, within which it can strategically time its perturbations. We show that the averaged subgradient descent method, which is optimal in the noiseless case, has worst-case performance that deteriorates quadratically with the corruption budget. Using performance estimation programming, (i) we construct and analyze three novel subgradient descent methods, and (ii) we establish a lower bound on the worst-case suboptimality gap of any first-order method satisfying a mild cone condition proposed in [5]. The worst-case performance of each of our methods degrades only linearly with the corruption budget. Moreover, we prove that all three proposed subgradient descent methods are near-optimal: their worst-case suboptimality gaps asymptotically match our lower bound. Our subgradient descent methods achieve this near-optimal performance without momentum or averaging, suggesting that these techniques are unnecessary in this context, in line with recent results by [9].
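To fix ideas, one natural formalization of this setting (the notation $x_t$, $g_t$, $c_t$, $\eta_t$, $G$, $B$ below is illustrative and not taken from the paper itself) is subgradient descent driven by corrupted oracle feedback:
\[
  x_{t+1} = x_t - \eta_t \tilde g_t,
  \qquad
  \tilde g_t = g_t + c_t,
  \quad g_t \in \partial f(x_t),
  \quad \|g_t\| \le G,
  \qquad
  \sum_{t=1}^{T} \|c_t\| \le B,
\]
where the adversary may allocate the corruption budget $B$ adaptively, concentrating the perturbations $c_t$ on whichever iterations are most damaging.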