In a supervised learning model, the input space comprises every possible combination of input values. It is usually larger than the output space, which contains all possible outputs of the model. When training a neural network, each layer has its own input space, made up of the values it can receive from the layer before it, so the term isn't limited to the raw data fed into the first layer.
To better illustrate an input space, say each input in your dataset is an 8 × 8 grid. That gives you 64 points, and each point can take one of 16 values. The input space then has a size of 16 raised to the power of 64 possible inputs.
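As a rough sketch (the numbers simply mirror the example above, not any particular dataset), a few lines of Python show how large that count is:

```python
# Size of the input space for an 8 x 8 input where each of the 64 positions
# can take one of 16 values: 16 ** 64 distinct possible inputs.

positions = 8 * 8          # 64 points in the 8 x 8 grid
values_per_position = 16   # each point can take one of 16 values

input_space_size = values_per_position ** positions
print(f"{input_space_size:.3e}")  # roughly 1.16e+77 possible inputs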
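```

Even a very large training set can only ever sample a vanishingly small fraction of a space that size.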
To ensure your machine learning model learns correctly, its training data needs to cover a substantial portion of the input space, which means continuously training it with large amounts of data. However, the task gets harder with every additional dimension in your input space, because the amount of training data required grows exponentially.
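To make that exponential growth concrete, the short sketch below counts how many grid cells would be needed to cover an input space at a fixed, assumed resolution of 10 bins per dimension (the resolution is an illustrative choice, not a figure from this article):

```python
# Illustrative only: the number of grid cells needed to cover an input space
# at a fixed resolution grows exponentially with the number of dimensions.

bins_per_dimension = 10  # assumed resolution per dimension

for dimensions in (1, 2, 3, 5, 10):
    cells = bins_per_dimension ** dimensions
    print(f"{dimensions:>2} dimensions -> {cells:,} cells to cover")
```

Going from 1 to 10 dimensions takes the count from 10 cells to 10 billion, which is why high-dimensional input spaces demand so much more training data.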
The feature space is the set of features used to describe a given set of data. While it is sometimes treated as a synonym for input space, the two differ: the feature space doesn't cover all possible inputs, only the characteristics you have chosen to represent each example.
The size of a machine learning model's feature space is its number of dimensions, one per feature. A model described by two features, for example, has a two-dimensional feature space in which each example is a point defined by its two variable values.
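As an illustration, the sketch below uses a hypothetical toy housing dataset and an assumed choice of two features ("sqft" and "bedrooms") to show how the selected features, rather than every raw column, define a two-dimensional feature space:

```python
# A minimal sketch, assuming a toy housing dataset: the raw inputs carry many
# columns, but the feature space is only the columns we choose to describe
# each example -- here two features, so a two-dimensional feature space.

raw_inputs = [
    {"sqft": 1400, "bedrooms": 3, "zip": "94107", "listing_id": "A1"},
    {"sqft": 2100, "bedrooms": 4, "zip": "94110", "listing_id": "B7"},
]

selected_features = ["sqft", "bedrooms"]  # hypothetical choice of features

# Each example becomes a point in the two-dimensional feature space.
feature_vectors = [[row[f] for f in selected_features] for row in raw_inputs]
print(feature_vectors)  # [[1400, 3], [2100, 4]]
```

Choosing a different set of features would change both the dimensionality and the shape of the space the model learns in, even though the underlying data is the same.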
Building the right machine learning model means working with large amounts of data to train it better, and it all starts with your input space. Manage your data at scale with help from Pachyderm. Sign up for a free trial to see how it can ease data management so you can focus on building models.