AI is being used in the GLAM sector for a wide range of tasks, such as recommendation, classification, tagging, handwriting recognition, and OCR. While the benefits of AI for the sector are clear, there are also valid concerns regarding fairness and bias. Responsible AI is a prominent topic in the AI research community and has produced a large number of fairness metrics and bias mitigation strategies. This keynote addresses the specific opportunities and challenges that arise when aiming for Responsible AI in the GLAM sector. First, we discuss how the fairness goals of the GLAM sector differ from those of other sectors. Second, we show how the availability of data - or the lack thereof - has largely determined the research directions in the field of Responsible AI; data curation as practiced in the GLAM sector may offer a way forward. Finally, in the GLAM sector, bias may be ingrained in collections and/or their metadata. An example from the Dutch GLAM sector is the colonial perspective often found in historic collections. This type of bias is hard to address. We show how the AI and GLAM communities could collaborate to measure and mitigate it.