TF-IDF, short for Term Frequency - Inverse Document Frequency, is a relevance score used to assess how relevant a document is to a given query.
For the keywords in the query, the calculation seeks to reflect each keyword's importance to the document being assessed, relative to how often that keyword appears across the entire corpus.
As the name suggests, the calculation is split into two components.
Term Frequency measures how often the term under consideration appears in the document currently being scored: the more often a term appears in a document, the more important it is assumed to be.
This alone doesn't tell the whole story, however, as keywords such as "the" or "and" appear constantly but carry little relevance value.
The other part of the calculation is Inverse Document Frequency, which tells us how rare a keyword is across the whole document corpus.
The main idea behind the calculation is that terms which are frequent in a document, but rare across all documents in the corpus are of high relevance value.
There are multiple variants of the calculation's components that have various properties, which are detailed on the TF-IDF Wikipedia page, but a simple form is:
TF * log(N/DF)
where:
TF = count of current term in this document
DF = count of the number of documents containing this term
N = total number of documents in the corpus
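The simple form above can be sketched in a few lines of Python. This is an illustrative implementation of the formula as written, not a production scoring function; the toy corpus below is made up for the example.

```python
import math

def tf_idf(term, doc, corpus):
    """Score `term` for `doc` using the simple form TF * log(N / DF).

    `doc` and each entry of `corpus` are lists of tokens.
    """
    tf = doc.count(term)                      # TF: occurrences in this document
    df = sum(1 for d in corpus if term in d)  # DF: documents containing the term
    n = len(corpus)                           # N: total documents in the corpus
    if df == 0:
        return 0.0  # term never appears anywhere in the corpus
    return tf * math.log(n / df)

corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "chased", "the", "cat"],
    ["the", "sun", "is", "bright"],
]

# "the" appears in every document, so log(3/3) = 0 and its score is 0,
# even though its TF is high. "cat" appears in only 2 of 3 documents,
# so it scores above 0 despite appearing just once in the first document.
```

Note how the IDF factor handles the stopword problem described earlier: a term present in every document gets log(N/DF) = log(1) = 0, zeroing out its score no matter how often it repeats.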
This algorithm does have its drawbacks. Since its core is based on term frequency, it can be gamed by spamming keywords within a document. Similarly, shorter documents are penalized simply because they contain fewer keyword occurrences.
While this calculation isn't used directly in many modern search engines, it's worth understanding as a baseline, since it underpins some of the more advanced algorithms. BM25, for example, addresses some of the issues above by smoothing out the effect of repeated keywords and accounting better for document length.
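To make the contrast with plain TF-IDF concrete, here is a sketch of one common formulation of BM25's per-term score. The parameter defaults k1=1.2 and b=0.75 are conventional choices, not values from this article.

```python
import math

def bm25_term(tf, df, n, doc_len, avg_len, k1=1.2, b=0.75):
    """One term's BM25 contribution (a common textbook formulation).

    tf: term count in this document, df: documents containing the term,
    n: total documents, doc_len/avg_len: this document's length vs the
    corpus average. k1 controls TF saturation, b controls length normalization.
    """
    idf = math.log((n - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
    # The TF factor saturates: each repeat of a keyword adds less than the
    # last, and doc_len/avg_len adjusts for document length, so keyword
    # spam and length bias are both dampened relative to raw TF * IDF.
    return idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * doc_len / avg_len))
```

Because the TF factor is bounded by (k1 + 1), repeating a keyword ten times yields far less than ten times the score, which is exactly the saturation behavior mentioned above.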