Avoiding the Forecasting Pitfalls: 10 Red Flags in Hiring Data Scientists
Navigate Through Your Next Data Science Interview Successfully by Adhering to These Key Forecasting Rules
Hiring competent data scientists, analysts, and machine learning engineers for forecasting roles is crucial, but many candidates sabotage themselves in interviews by making amateur mistakes.
Here are the top 10 time series forecasting flubs that raise serious red flags for any technically competent hiring manager. Avoid these common pitfalls to ace your next data science interview!
❌ Not doing EDA
❌ Not checking if the time series is stationary
❌ Not checking whether the time series has gaps
❌ Not selecting the correct forecasting metric
❌ Not reading the task correctly and instead doing something else
❌ Not using a simple benchmark
❌ Jumping into LSTM or deep learning in general when there are not many data points
❌ Not being able to explain why a particular model was selected
❌ Not understanding the model used and what’s under the hood
❌ Thinking that Facebook Prophet is the best algorithm, but not being able to explain its main drawbacks or why it is a terrible model
Mistake 1 — Not Doing EDA: Always explore and visualise the time series data BEFORE modelling. Skipping exploratory data analysis misses crucial insights into trends, seasonality, anomalies, etc.
In data science, diving headfirst into modelling without first understanding the data at hand is akin to navigating uncharted waters without a map. This analogy holds particularly true in time series forecasting, a domain where the temporal arrangement of data points is pivotal.
The first and indispensable step before any modelling endeavour should be Exploratory Data Analysis (EDA).
Unveiling the Data’s Story:
Time series data often comes with its narrative, encapsulated in its trends, seasonality, and irregularities.
EDA is the flashlight that illuminates this narrative. By visualising the data, you unveil underlying patterns and behaviours that are both insightful and essential for any subsequent modelling.
1. Trends: Observing long-term increases or decreases in the data is crucial. Understanding trends can provide insights into the data’s general direction over time.
2. Seasonality: Seasonal patterns are recurring behaviours that occur predictably over time. Identifying seasonality helps in understanding and modelling the cyclical nature of the data.
3. Anomalies: Unusual spikes, dips, or outliers in the data can provide critical insights. They could signify errors, extraordinary events, or crucial structural changes in the underlying system.
Fine-Tuning the Modeling Approach:
The insights derived from EDA can significantly influence the choice of models and the preprocessing steps needed. For instance, knowing the seasonality can guide the selection of seasonal decomposition models, while understanding trends might necessitate detrending the data.
1. Stationarity Check: Many time series models require the data to be stationary, i.e., statistical properties like the mean and variance remain constant over time. EDA can help identify whether stationarity is an issue that needs to be addressed before modelling.
2. Handling Missing Values and Gaps: Through EDA, missing values or gaps in the data can be identified, which might need imputation or other handling before model fitting.
3. Selecting Appropriate Lag Variables: Exploring autocorrelations and partial autocorrelations helps identify significant lags to include in the model, as sketched below.
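To make this concrete, here is a minimal EDA sketch in Python, assuming pandas, matplotlib, and statsmodels are installed; the monthly series `y` is synthetic and purely illustrative.

```python
# Minimal EDA sketch on a synthetic monthly series (illustrative only).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical series: trend + yearly seasonality + noise.
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
rng = np.random.default_rng(42)
y = pd.Series(
    50 + 0.5 * np.arange(96) + 10 * np.sin(2 * np.pi * np.arange(96) / 12)
    + rng.normal(0, 2, 96),
    index=idx,
)

# 1. Visualise the raw series: trends, seasonality, anomalies.
y.plot(title="Raw series")
plt.show()

# 2. Decompose into trend, seasonal, and residual components
#    (period=12 assumes monthly data with yearly seasonality).
seasonal_decompose(y, model="additive", period=12).plot()
plt.show()

# 3. Autocorrelation and partial autocorrelation to spot significant lags.
plot_acf(y, lags=36)
plot_pacf(y, lags=36)
plt.show()
```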
A Prerequisite for Effective Communication:
EDA paves the way for effective modelling and facilitates better communication with stakeholders.
Visualisations created during EDA provide a clear, intuitive way to convey the data’s story, challenges, and the rationale behind modelling choices to non-technical stakeholders.
Skipping EDA is a dangerous shortcut that may lead to inappropriate modelling, misinterpretation of results, and, ultimately, misguided decisions.
It’s akin to ignoring a warning light on your car’s dashboard: you might keep driving, but you remain blind to issues that can lead to a breakdown.
Hence, the mantra for robust time series forecasting should always be: explore, visualise, and then model.
Mistake 2 — Not Checking Stationarity: Failing to test for stationarity and ignoring non-stationary issues like trends and seasonality leads to unreliable models. Always check for stationarity and difference/detrend if needed.
Time series data is often characterised by its evolution, which manifests as trends, seasonal patterns, and other temporal structures.
Stationarity, a property where the statistical attributes of the data, such as mean and variance, remain unchanged over time, is a cornerstone assumption in many time series modelling techniques. Overlooking the stationarity check can lead to models that are not only inaccurate but fundamentally flawed.
Understanding Stationarity:
- Definition: A time series is said to be stationary if its statistical properties do not change over time. It should look roughly the same at any time point or over any time interval.
- Importance: Many time series forecasting models rely on the assumption of stationarity. The reason is that stationary processes are easier to predict as they bounce around a constant mean.
Checking for Stationarity:
- Visual Inspection: Plotting the data can sometimes reveal apparent trends or seasonal patterns. However, visual inspection is subjective and not always reliable.
- Statistical Tests:
- Augmented Dickey-Fuller (ADF) Test: A standard statistical test whose null hypothesis is that the series has a unit root (is non-stationary). A low p-value (typically ≤ 0.05) indicates stationarity.
- Kwiatkowski-Phillips-Schmidt-Shin (KPSS) Test: A complementary test whose null hypothesis is that the series is stationary, so a low p-value indicates non-stationarity.
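As a hedged illustration, the sketch below runs both tests with statsmodels on a synthetic random walk, which is non-stationary by construction.

```python
# Stationarity tests on a synthetic random walk (illustrative only).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(0)
y = pd.Series(np.cumsum(rng.normal(size=200)))  # random walk: non-stationary

adf_p = adfuller(y)[1]             # H0: unit root (non-stationary)
kpss_p = kpss(y, nlags="auto")[1]  # H0: series is (level-)stationary

print(f"ADF p-value:  {adf_p:.3f} (small => evidence of stationarity)")
print(f"KPSS p-value: {kpss_p:.3f} (small => evidence of non-stationarity)")
```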
Addressing Non-Stationarity:
- Detrending: Removing the underlying trend in the series, which could be a constant drift or a deterministic trend.
- Differencing: Taking the difference between consecutive observations to remove trends and seasonality. This is often a very effective way to transform a non-stationary series into a stationary one.
- Seasonal Decomposition: Decomposing the time series into seasonal, trend, and residual components. Models like STL (Seasonal and Trend decomposition using Loess) and MSTL are helpful for this.
- Transformation: Sometimes, a mathematical transformation, like taking the log or square root of the series, can help in stabilising the variance and achieving stationarity.
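A minimal sketch of these remedies, assuming a positive-valued monthly series (synthetic here): a log transform, first differencing, and seasonal differencing, followed by a re-check with the ADF test.

```python
# Making a trending, seasonal series stationary (illustrative only).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

t = np.arange(96)
y = pd.Series(
    50 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12)
    + np.random.default_rng(1).normal(0, 2, 96),
    index=pd.date_range("2015-01-01", periods=96, freq="MS"),
)

y_log = np.log(y)                  # variance-stabilising transform (requires y > 0)
y_diff = y_log.diff().dropna()     # first difference removes the trend
y_sdiff = y_diff.diff(12).dropna() # seasonal difference removes yearly seasonality

print(f"ADF p-value, raw series:         {adfuller(y)[1]:.3f}")
print(f"ADF p-value, after differencing: {adfuller(y_sdiff)[1]:.3f}")
```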
Ensuring Reliable Models:
Ignoring stationarity can lead to spurious results and models with little predictive power. For example, a model might appear to have a good fit with high R-squared values, but in reality, it’s capturing spurious relationships due to non-stationarity.
Checking for and addressing stationarity is a non-negotiable step in time series analysis. Failing to do so not only undermines the reliability and accuracy of the models but can also lead to misleading interpretations and misguided decisions. Hence, before modelling, always check for stationarity and, when appropriate, apply necessary transformations to handle non-stationary issues like trends and seasonality.
Mistake 3 — Ignoring Gaps in Data: Gaps in historical data impact forecasting. Imputing missing values or modelling intermittency is required for reliable forecasts. Ignoring gaps demonstrates a lack of care.
In most problems, time series data is expected to have a consistent time interval between observations. In real-world scenarios, however, data often comes with missing values or gaps.
Ignoring these gaps in time series data before jumping into forecasting is a cardinal misstep that can lead to inaccurate predictions and misguided decisions. Addressing these gaps diligently is essential for creating robust and reliable forecasting models.
Identifying the Issue:
1. Visual Inspection: Plotting the time series can clearly show where the data is missing or irregular.
2. Automated Checks: Implementing automated checks to identify missing timestamps or irregular intervals can help systematically find data gaps.
Understanding the Impact:
1. Inaccuracy: Gaps can lead to models misinterpreting the data trend and seasonal patterns, resulting in inaccurate forecasts.
2. Bias: Missing values might occur non-randomly, leading to bias in the analysis if not appropriately handled.
Strategies for Handling Gaps:
1. Imputation:
— Mean Imputation: Filling the gaps by the mean value of the surrounding data.
— Interpolation: Linear or more complex interpolation methods can estimate the missing values based on the neighbouring data points.
— Last Observation Carried Forward (LOCF) or Next Observation Carried Backward (NOCB): Filling a gap forward with the last known value or backward with the next known value.
2. Modelling Intermittency:
— Zero-Inflated Models: If the gaps represent true zeros (e.g., periods with no sales), zero-inflated models can jointly model the presence/absence of a value and its magnitude.
— Intermittent Demand Models: Models explicitly designed to handle intermittent data (see, for example, Nixtla’s statsforecast: https://nixtla.github.io/statsforecast/).
3. External Information:
— Incorporating external information or additional variables that could explain the missingness and help fill the gaps.
4. Change of Frequency:
— Sometimes, changing the frequency of the data (e.g., from daily to weekly) can help mitigate the gaps. Several models designed specifically for handling intermittent data employ this strategy.
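As a rough sketch of these strategies with pandas (the daily series and the dropped timestamps are synthetic assumptions), gaps can be detected by reindexing onto the expected date range and then filled or aggregated away:

```python
# Detecting and filling gaps in a daily series (illustrative only).
import numpy as np
import pandas as pd

# Hypothetical daily series with two missing days removed from the index.
idx = pd.date_range("2024-01-01", periods=30, freq="D").delete([10, 11])
y = pd.Series(np.random.default_rng(3).normal(100, 5, len(idx)), index=idx)

# 1. Detect gaps by reindexing onto the full expected date range.
full_idx = pd.date_range(y.index.min(), y.index.max(), freq="D")
y_full = y.reindex(full_idx)
print("Missing timestamps:", y_full[y_full.isna()].index.tolist())

# 2. Impute: linear interpolation, or forward fill (LOCF).
y_interp = y_full.interpolate(method="linear")
y_locf = y_full.ffill()

# 3. Alternatively, change frequency (e.g., daily -> weekly sums) to mitigate gaps.
y_weekly = y_full.resample("W").sum(min_count=1)
```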
A Sign of Diligence:
Addressing gaps in the data is a sign of thoroughness and an understanding of the nuances involved in time series forecasting. Ignoring gaps, on the other hand, demonstrates a lack of care and can significantly undermine trust in the forecasting model.
Time series forecasting is not merely about applying sophisticated models but understanding and preparing the data to ensure the models can do their job correctly. Ignoring gaps in the data is a glaring oversight that can easily be avoided with diligence and the proper techniques to handle missing values or model intermittency.
Mistake 4 — Using Wrong Evaluation Metric: Employing unsuitable error metrics such as MAPE unveils a fundamental misunderstanding of forecasting principles.
The choice of evaluation metric should not only align with the inherent properties of the data but should ideally stem from the realm of proper scoring rules. Additionally, the metric should facilitate a meaningful comparison against a benchmark and ensure accurate aggregation over time, among other considerations.
The evaluation metric chosen to assess a forecasting model’s performance is a fundamental aspect of the model-building and selection process.
Utilising inappropriate metrics can lead to a misunderstanding of the model’s accuracy and, potentially, the selection of a suboptimal model.
In time series forecasting, it’s crucial to employ metrics that accurately reflect the model’s ability to predict unseen data points across time.
Understanding the Implications:
1. Misleading Interpretations:
— Employing metrics not tailored for time series data can yield misleading interpretations of a model’s performance, particularly when trends and seasonalities are present.
2. Inadequate Assessment:
— The wrong metric may not capture the true forecasting capabilities of the model, thereby failing to provide a clear understanding of how the model will perform on new data.
Some Metrics for Forecasting:
1. Mean Absolute Error (MAE):
— MAE measures the average magnitude of the errors between predicted and observed values without considering their direction. It’s easy to interpret and provides a linear error penalty. The downside is that it is not a relative metric, so aggregating MAE across time series requires extra steps.
2. Root Mean Square Error (RMSE):
— RMSE also measures the magnitude of error between predicted and observed values but gives more weight to more significant errors due to its quadratic penalty. This makes RMSE sensitive to outliers and might be a suitable choice when significant errors are undesirable.
However, it’s crucial to note that metrics such as RMSE should not be discarded outright in many scenarios.
Especially in regression problems and time series forecasting, quadratic error metrics like RMSE are often categorised as proper scoring rules and are pivotal in guiding the model to provide honest forecasts under the data distribution.
Delving into Proper Scoring Rules:
Proper scoring rules are a class of evaluation metrics designed to elicit truthful predictive distributions. In other words, they reward models for the accuracy of their probabilistic forecasts.
Significance:
A proper scoring rule is a performance metric that incentivises the forecasting model to make honest predictions and thereby helps produce probabilistically calibrated forecasts consistent with the data distribution.
RMSE belongs to the broader class of quadratic scores, which can be shown to be strictly proper for continuous variables. Significant mistakes are penalised more heavily by squaring the errors, encouraging conservative predictions when uncertainty is high.
In contrast, metrics like absolute error penalise wildly inaccurate predictions less heavily and are optimised by the median rather than the mean, which can lead to models that understate risk and produce overly narrow prediction intervals.
RMSE aligns model objectives with accurate uncertainty quantification — wider prediction intervals are made when warranted by the data. It avoids rewarding blind overconfidence. This leads to well-calibrated forecast distributions keyed to the underlying variability.
For these reasons, RMSE is theoretically motivated as a statistically sound metric for evaluating probabilistic predictions on regression tasks with continuously-valued targets. It incentivises honest forecasts reflective of the data characteristics and uncertainty.
You can read more on the subject in the seminal paper by Gneiting and Raftery, ‘Strictly Proper Scoring Rules, Prediction, and Estimation’.
3. Root Mean Squared Scaled Error (RMSSE):
— RMSSE scales the squared forecast errors by the mean squared error of a one-step naïve forecast on the training data, providing a scale-free error metric. This makes it useful for comparing forecasting performance across series with different scales. RMSSE is a quadratic error and also a proper scoring rule.
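For illustration, a minimal numpy sketch of MAE, RMSE, and RMSSE is given below; the RMSSE here follows the M5-style definition that scales by the in-sample one-step naïve error, and the arrays are made-up numbers.

```python
# MAE, RMSE, and RMSSE with numpy (illustrative values only).
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def rmsse(y_train, y_true, y_pred):
    # Scale by the MSE of a one-step naive forecast on the training data.
    naive_mse = np.mean(np.diff(y_train) ** 2)
    return np.sqrt(np.mean((y_true - y_pred) ** 2) / naive_mse)

y_train = np.array([10., 12., 11., 13., 14., 13., 15.])
y_true = np.array([16., 15., 17.])
y_pred = np.array([15., 15., 16.])
print(mae(y_true, y_pred), rmse(y_true, y_pred), rmsse(y_train, y_true, y_pred))
```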
The choice of evaluation metric is not a one-size-fits-all scenario. It requires a clear understanding of the data properties and the forecasting objectives.
Employing the wrong metric can be a grave mistake that misguides the model selection process, leading to unreliable forecasts. By choosing a suitable error metric aligned with the data properties and forecasting goals, one can significantly enhance the model selection process and improve the reliability and accuracy of the forecasts.
Mistake 5 — Not Following Instructions: Blindly applying favoured models without carefully reading the requirements violates the interview premise. Follow instructions precisely.
In data science, particularly in an interview setting, the ability to meticulously follow instructions is as crucial as having a deep understanding of algorithms and models. When tasked with a forecasting problem, it’s not merely about applying a favoured or sophisticated model but aligning the approach with the specified requirements and constraints of the task.
Interview exercises assess specific skills and knowledge relevant to the role.
Mindlessly applying favoured forecasting models without reading the requirements closely violates this premise.
For example, the instructions may ask you to focus on uncertainty quantification. However, applying point forecasting methods and ignoring prediction intervals demonstrates a disregard for the core objective.
Or the exercise may require explaining modelling choices and showing work. But skipping straight to final forecasts misses the goal of evaluating the thought process.
Carefully reading the instructions provides insights into what competencies the interviewer aims to assess. Follow them precisely to demonstrate you can understand client needs and tailor analysis accordingly.
Understanding the Importance:
1. Adherence to Guidelines:
— The instructions in a forecasting task convey important guidelines and constraints critical to arriving at a viable solution. Ignoring these instructions can lead to erroneous outcomes and signify a lack of attention to detail.
2. Reflecting Comprehension:
— Following instructions accurately reflects a candidate’s ability to comprehend and adhere to specified requirements, which is pivotal in real-world data science projects.
Common Oversights:
1. Overlooking Model Constraints:
— Sometimes, instructions might specify the types of models or techniques to be employed or avoided. Overlooking such directives and mindlessly applying favoured models demonstrates a disregard for project specifications.
2. Ignoring Data Preprocessing Instructions:
— Instructions might include crucial steps for data preprocessing, which, if ignored, can significantly impact the model’s performance.
3. Bypassing Evaluation Metrics:
— The instructions often specify the evaluation metrics for assessing the model’s performance. Using different metrics may lead to incorrect assessments and comparisons.
Strategies for Adherence:
1. Thorough Review:
— Before diving into modelling, thoroughly review the instructions to understand the task requirements, constraints, and evaluation criteria.
2. Checklist Creation:
— Create a checklist based on the instructions to ensure all requirements are met during the model building and evaluation process.
3. Continuous Reference:
— Continuously refer to the instructions while working through the task to ensure alignment with the specified guidelines.
Demonstrating Professionalism and Competence:
1. Precision and Attention:
— Precise adherence to instructions reflects a professional approach and signifies competence in handling real-world projects where instructions and guidelines are paramount.
2. Effective Communication:
— If any part of the instructions is unclear, seeking clarification demonstrates proactive communication and a desire to fulfil the task requirements accurately.
Adhering to instructions is non-negotiable, because the specified guidelines reflect business or project objectives. Ignoring this critical aspect violates the interview premise and could be a red flag to potential employers regarding a candidate’s suitability for data-driven roles.
Mistake 6 — No Simple Benchmark: Establish simple statistical benchmarks as a baseline for performance comparisons.
Before deploying sophisticated models, first establish basic statistical benchmarks to contextualise performance. Simple methods like seasonal naïve models provide inexpensive baselines.
For example, for hourly electricity demand forecasting, a reasonable benchmark is predicting each hour’s demand will be the same as that hour’s average historical demand. This accounts for daily and weekly seasonal cycles.
While simple, this benchmark is surprisingly hard to beat for some series. If a sophisticated model, however complex, cannot outperform such a simple benchmark, it has no business value.
Benchmarking demonstrates you understand the core repetitive patterns in the data before layering on advanced techniques. It also quantifies the incremental lift more complex methods provide.
As George Box said, “All models are wrong, but some are useful.” No model will be perfect, but benchmarks help determine if added complexity is warranted for the improvement.
Most importantly, benchmarks prevent overclaiming by calibrating expectations. Any limitations of a proposed model become clear when contrasted with basic statistical approaches.
In forecasting, domain expertise is knowing when not to apply sophisticated methods because the underlying signal is simple.
Benchmarks prevent overengineering and anchor to realistic performance targets.
In forecasting, it’s tempting to jump straight into sophisticated models to achieve high accuracy. However, neglecting to establish a simple statistical benchmark as a baseline for performance comparisons is a significant oversight. A simple benchmark serves as a reference point against which the performance of more complex models can be evaluated, ensuring that the added complexity is warranted and beneficial.
Understanding the Significance:
1. Baseline Performance:
— A simple benchmark provides a baseline performance metric that any sophisticated model should surpass to be considered valuable. It sets a minimal expectation of performance.
2. Validation of Complexity:
— By comparing the performance of complex models to a simple benchmark, it’s easier to validate whether the added complexity is yielding a significant improvement in forecasting accuracy.
3. Insight into Data Characteristics:
— Simple benchmarks can provide insights into the inherent predictability of the time-series data, helping to set realistic expectations regarding the level of accuracy that can be achieved.
Common Simple Benchmarks:
1. Naïve Forecast:
— A naïve forecast, where each forecasted value is set to the last observed value, is a standard simple benchmark in time series forecasting.
2. Seasonal Naïve:
— In data with clear seasonality, a seasonal naïve forecast, which uses the value from the same season in the previous cycle as the forecast, can be a helpful benchmark.
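A minimal sketch of both benchmarks on a synthetic monthly series (the data and the 12-month holdout are illustrative assumptions):

```python
# Naive and seasonal naive benchmarks on a holdout window (illustrative only).
import numpy as np
import pandas as pd

t = np.arange(120)
y = pd.Series(100 + 20 * np.sin(2 * np.pi * t / 12)
              + np.random.default_rng(7).normal(0, 3, 120))
train, test = y[:-12], y[-12:]

naive = np.repeat(train.iloc[-1], len(test))      # last observed value
seasonal_naive = train.iloc[-12:].to_numpy()      # same month, previous year

def mae(y_true, y_pred):
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

print("Naive MAE:          ", mae(test, naive))
print("Seasonal naive MAE: ", mae(test, seasonal_naive))
# Any sophisticated model should beat the better of these two numbers.
```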
Establishing and Utilizing Benchmarks:
1. Performance Measurement:
— Measure the performance of the simple benchmark using the same error metric that will be used to evaluate the sophisticated models. This ensures a fair comparison.
2. Documentation:
— Document the performance of the simple benchmark and use it as a reference throughout the model development and evaluation process.
3. Communication:
— Communicate the benchmark performance to stakeholders to set realistic expectations regarding what can be achieved with more complex modelling approaches.
Encouraging a Rigorous Approach:
1. Objective Evaluation:
— A simple benchmark encourages a more objective evaluation of different models and promotes a rigorous approach to model selection.
2. Focus on Value Addition:
— It focuses on value addition rather than complexity, encouraging the selection of models that significantly improve over the simple benchmark.
Establishing a simple statistical benchmark is a fundamental step in forecasting that fosters a disciplined and objective approach to model evaluation and selection. It ensures that the pursuit of complex models is grounded in achieving significant performance improvements, promoting a value-driven rather than a complexity-driven approach to forecasting.
Mistake 7 — Overcomplicating Models: Defaulting to complicated models like LSTMs shows no understanding of what works in forecasting (see my article ‘What Truly Works in Time Series Forecasting — The Results from Nixtla’s Mega Study’ for details) or of parsimony principles. Start simple.
A common mistake is defaulting to complex black-box models like neural networks, even when data is limited. This shows a lack of understanding and often leads to severe overfitting.
Forecasting requires understanding parsimony — start with the simplest model that captures the core signal, then incrementally add complexity only as needed.
For example, a basic seasonal ARIMA model may perform very well on a small retail sales dataset, whereas jumping straight to a multivariate LSTM with hundreds of parameters would likely just fit noise.
The LSTM’s flexibility leads it to model spurious patterns that are not statistically robust. This results in degraded performance on new data.
With limited data, complex models are poorly constrained. Simpler linear models and statistical benchmarks should be established first. The onus is on justifying greater complexity.
This avoids the common anti-pattern of applying a needlessly intricate deep learning model as a hammer for every nail. Advanced models can obscure overfitting, whereas linear methods directly show instability.
The interview assesses your ability to match model complexity to the data size and properties. Overengineering violates principles of forecasting parsimony and invites overfitting risks. Start simple.
Defaulting to black-box algorithms on small data demonstrates a lack of practical judgment — a core competency in applied forecasting roles.
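As a hedged sketch of this workflow, the example below compares a seasonal naive benchmark with a seasonal ARIMA (via statsmodels’ SARIMAX) on a small synthetic monthly dataset; the series, split, and model orders are illustrative assumptions, not a recommended configuration.

```python
# Does a seasonal ARIMA add value over a seasonal naive benchmark? (illustrative)
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

t = np.arange(72)  # six years of hypothetical monthly data: a small dataset
y = pd.Series(200 + 0.8 * t + 25 * np.sin(2 * np.pi * t / 12)
              + np.random.default_rng(5).normal(0, 5, 72))
train, test = y[:-12], y[-12:]

seasonal_naive = train.iloc[-12:].to_numpy()

sarima = SARIMAX(train.to_numpy(), order=(1, 1, 1),
                 seasonal_order=(0, 1, 1, 12)).fit(disp=False)
sarima_fc = sarima.forecast(steps=12)

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

print("Seasonal naive MAE:", mae(test, seasonal_naive))
print("SARIMA MAE:        ", mae(test, sarima_fc))
# Only if the gap is material (and stable across validation windows) is it worth
# considering heavier models such as gradient boosting or an LSTM.
```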
Grasping the Principle of Parsimony:
1. Simplicity and Interpretability:
— Simpler models are often easier to understand and interpret, making them preferable in scenarios where model interpretability is crucial for decision-making or stakeholder communication.
2. Avoidance of Overfitting:
— Simple models have fewer parameters and are less likely to overfit the noise in the data, especially when the dataset is small.
3. Computational Efficiency:
— Simpler models are usually more computationally efficient, requiring less time and resources to train and evaluate.
The Perils of Overcomplication:
1. Overfitting:
— Complex models like LSTMs have a high capacity and can easily fit the noise in the data rather than the underlying pattern, especially when the data is scarce.
2. Interpretability Loss:
— Black-box models provide no insight into the relationships within the data, which can be a significant drawback in many practical situations.
3. Increased Computational Demand:
— The computational demands for training and tuning complex models can be substantial, often requiring specialised hardware and long training times.
Adopting a Gradual Complexity Approach:
1. Start with Simplicity:
— Begin with simpler models like linear regression or basic time series models like ARIMA to understand the baseline performance and the data's inherent structure.
2. Incremental Complexity:
— If simple models fail to capture the underlying patterns in the data, gradually increase the model complexity by moving to more sophisticated models like Random Forests, Boosted Trees (CatBoost/XGBoost/LightGBM), or neural networks.
3. Justify Complexity:
— Only resort to highly complex models like LSTMs if there’s a clear justification, such as a substantial improvement in forecast accuracy that simpler models cannot achieve.
4. Regularization and Validation:
— Employ regularisation techniques and rigorous validation to guard against overfitting, especially when using complex models.
Ensuring Model Suitability:
1. Match Model to Data Size:
— Ensure that the chosen model is suitable for the size and complexity of the data. Complex models may perform poorly on small datasets.
2. Understand Data Characteristics:
— Understand the data’s characteristics and requirements and choose models well-suited to these properties.
Overcomplicating models in forecasting tasks, particularly without justification or understanding of the data at hand, can lead to many issues, including overfitting, loss of interpretability, and wasted computational resources.
Adhering to the principle of parsimony by starting simple and only escalating to more complex models when warranted can lead to more robust, interpretable, and efficient forecasting solutions.
Mistake 8 — Can’t Justify Modeling Choices: All modelling decisions should be justified based on data characteristics and evaluation. Lack of sound reasoning raises competence concerns.
Forecasting requires making many decisions — data preprocessing, feature selection, model family, hyperparameters, etc. Every choice should be grounded in the data context and evaluation process.
For example, log transforming the time series or choosing an RNN architecture needs justification.
Being able to rationally explain choices demonstrates deep understanding versus mindlessly trying modelling combinations. It builds credibility in the recommendation.
Interviewers want to assess your ability to think critically and make analytically sound choices, not just arrive at an answer. The reasoning behind decisions reveals true competency.
If faced with questions about selections, responding with superficial explanations like “it worked better” or “this model is known to be accurate” reveals a lack of substance.
Instead, cite relevant characteristics identified in EDA, model evaluation metrics, cross-validation, statistical tests, etc. Show the choice aligns with the data evidence.
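For instance, the sketch below shows one simple, citable piece of evidence for a log or Box-Cox transform: checking whether local variability grows with the level of the series (the data here is synthetic and purely illustrative).

```python
# A data-driven justification for a variance-stabilising transform (illustrative).
import numpy as np
import pandas as pd

t = np.arange(120)
# Hypothetical series whose variability grows with its level.
y = pd.Series((50 + t) * (1 + 0.2 * np.sin(2 * np.pi * t / 12))
              + np.random.default_rng(9).normal(0, 1, 120) * (1 + t / 60))

rolling_mean = y.rolling(12).mean().dropna()
rolling_std = y.rolling(12).std().dropna()
corr = np.corrcoef(rolling_mean, rolling_std)[0, 1]

print(f"Correlation between rolling mean and rolling std: {corr:.2f}")
# A strong positive correlation is concrete evidence to cite when justifying a
# log or Box-Cox transform, rather than saying "it worked better".
```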
Making arbitrary or poorly justified modelling decisions raises concerns about analytical rigour and substance. Always be prepared to explain the rationale behind choices in a precise, data-driven manner.
The ability to justify modelling is a hallmark of competence and understanding in data science and forecasting. Each decision made during the modelling process should stem from a sound rationale rooted in the characteristics of the data and the evaluation of model performance. The absence of clear justification for modelling choices undermines the model's reliability and raises concerns regarding the competence and thoroughness of the individual or team responsible for the model.
Significance of Justifiable Modeling Choices:
1. Informed Decision-Making:
— Making informed decisions based on data characteristics and preliminary analysis ensures that the chosen model is well-suited to capture the underlying patterns in the data.
2. Avoidance of Arbitrary Choices:
— Arbitrarily choosing models or parameters can lead to suboptimal performance and indicate a lack of understanding or thoroughness.
3. Transparency and Trust:
— Being able to justify modelling choices promotes transparency and builds trust with stakeholders by demonstrating a systematic and reasoned approach to forecasting.
Evaluating and Justifying Modeling Choices:
1. Exploratory Data Analysis (EDA):
— Conducting thorough EDA to understand the data’s characteristics is the first step towards making informed modelling decisions.
2. Understanding Model Assumptions:
— Different models come with different assumptions. Ensuring that the chosen model’s assumptions align with the data’s characteristics is crucial.
3. Performance Evaluation:
— Evaluate the performance of different models using appropriate metrics and validation techniques to identify the most suitable model for the task at hand.
4. Benchmarking:
— Comparing the performance of the chosen model against more straightforward benchmarks or alternative models provides a basis for justifying the modelling choices.
5. Interpretability vs. Accuracy Trade-off:
— Sometimes, there’s a trade-off between model interpretability and accuracy. Being able to justify the chosen point on this trade-off spectrum based on the project requirements is essential.
Demonstrating Competence through Justification:
1. Articulating Reasoning:
— Articulating the reasoning behind modelling choices demonstrates a deep understanding of the modelling process and the data.
2. Documenting Decisions:
— Documenting the decision-making process, including evaluating alternative models and the rationale for the chosen model, promotes transparency and allows for constructive feedback.
3. Engaging in Discussions:
— Being open to discussions regarding modelling choices and willing to consider alternative approaches based on reasoned arguments is a sign of professionalism and competence.
The ability to justify modelling choices is an essential aspect of the forecasting process that reflects the competence and thoroughness of a data scientist. It promotes an informed, transparent, and systematic approach to forecasting, which is crucial for building reliable models and fostering stakeholder trust. Without a sound justification for modelling decisions, the forecasting process can appear arbitrary, undermining the credibility of the model and the individuals or teams responsible for it.
Mistake 9 — Don’t Understand Own Model: Using complex models like Facebook Prophet without comprehending strengths, weaknesses, and assumptions is technically weak. Know your tools!
Failing to understand your models is problematic for several reasons.
Many data scientists use automated libraries like Facebook Prophet as black-box tools, without understanding how they work or why they often fail.
This approach is technically weak and risky, especially considering that Facebook Prophet is generally a terrible forecasting model unsuitable for most forecasting tasks.
In most situations, the fact that a data scientist reached for “pip install prophet” means that they do not understand how forecasting works and are guided by hype or by the superficial, low-quality materials easily found online.
Such data scientists are a risk for any business, as they lack an understanding of why specific forecasting models don’t work. They are a liability for the data science teams and companies employing them, because poorly built forecasting solutions tend to fall apart in production, quickly exposing companies to expensive fiascos.
Before applying any advanced model, you should comprehend its assumptions, appropriate use cases, and limitations. Viewing it as a magical forecasting box indicates a lack of expertise.
For example, Prophet makes specific assumptions about trends, seasonality, and holidays in the data. Are these suitable for your problem context and domain? How does the model extrapolate? More importantly, Prophet is flawed by design as it does not include autoregressive features critical for modelling local time series patterns.
Using a complex model without the ability to explain its mechanisms, strengths vs weaknesses, and critical hyperparameters demonstrates shallow understanding. Forecasting requires deep knowledge of tools.
The interviewer wants to assess your degree of technical depth — the ability to tightly couple domain concepts with models. Black-box usage indicates reliance on software rather than expertise.
Truly skilled data scientists have solid conceptual knowledge of models, enabling them to make informed selections and understand limitations. Using complex tools as empty off-the-shelf packages raises competency concerns. Know your models!
Utilising complex models without thoroughly understanding their underlying principles, strengths, and weaknesses is a technical misstep.
Models like Facebook Prophet, while popular, come with certain assumptions and characteristics that, if ignored or misunderstood, can lead to inaccurate forecasts or misinterpretations. Knowing your tools inside out is a fundamental requirement in forecasting and data science at large.
Delving into Model Understanding:
- Model Assumptions:
Every model comes with inherent assumptions. For instance, Prophet ignores local patterns because it has no autoregressive terms and is therefore unsuitable for the vast majority of general forecasting tasks. Not understanding or checking these assumptions against the data can lead to suboptimal or erroneous forecasts.
Strengths and Weaknesses:
Understanding the strengths and weaknesses of a model helps in choosing the right tool for the task.
Parameter Tuning:
Many models have hyperparameters that need tuning. Without understanding the model, tuning these hyperparameters becomes a shot in the dark, often leading to poor performance.
The Importance of Knowing Your Tools:
Informed Decision Making:
Knowing the ins and outs of your models enables informed decision-making, ensuring that the model aligns well with the data and the forecasting objectives.
Effective Communication:
Understanding your model aids in effective communication with stakeholders, as you can explain the model’s choices, predictions, and uncertainties clearly and concisely.
Problem-Solving:
When issues arise, understanding the model’s workings is crucial for troubleshooting and refining the model to improve its performance.
Pathways to Model Understanding:
Educational Resources:
- Delve into books, online courses, and documentation to learn about your models.
Practical Experimentation:
Hands-on experimentation with the model, using different data types and settings, can provide valuable insights into its behaviour and limitations.
Community Engagement:
Engaging with the community, attending workshops, and discussing with peers are great ways to deepen your understanding and stay updated with the latest best practices.
Maintaining a Balance:
Complexity vs. Understanding:
It’s essential to balance the complexity of the models you use and your understanding of them. Overly complex models that are not well-understood can lead to “black-box” solutions, which can be risky and unreliable.
Continuous Learning:
Data science is ever-evolving, and continuous learning is critical to keeping up with new models and techniques.
Utilising complex models like Facebook Prophet without a thorough understanding is a technical weakness that can undermine the forecasting process. It’s imperative to know your tools well, comprehend your models' assumptions, strengths, and weaknesses, and be prepared to justify and explain your modelling choices. This enhances your forecasts' reliability and accuracy, fostering trust and effective stakeholder communication.
Mistake 10 — Overreliance on Software: Software proficiency is helpful, but real expertise requires deep conceptual forecasting knowledge. Don’t let tools substitute for thinking.
While fluency in data science tools like Python and R is essential, true forecasting expertise requires deep conceptual knowledge that software cannot replace.
For example, properly pre-processing time series data involves understanding techniques like differencing, detrending, imputation, etc. No software library substitutes for knowing which methods to apply based on data properties.
Expertise means mathematically understanding model assumptions and mechanics, not just calling APIs. As George Box said, “All models are wrong, but some are useful” — so their appropriateness must be critically examined.
During interviews, simply demonstrating fancy software usage is insufficient. The evaluator wants to see you can think through problems from first principles and make analytically sound choices.
Over-reliance on tools risks applying models and pre-processing that are improper for the data context. Software should enhance human insight, not attempt to automate it away.
While languages like Python greatly aid forecasting work, they are means, not ends. The skilled forecaster relies on deep conceptual knowledge to drive analysis, supported by software capabilities where applicable.
Professional forecasting requires a solid theoretical foundation in time series concepts, statistics, and modelling. An overemphasis on software usage at the expense of thinking critically suggests questionable analytical maturity.
In the digital age, software tools have become indispensable aids in forecasting. They offer a myriad of functionalities that simplify data analysis, model building, and result visualisation. However, the convenience and power of software can sometimes lead to an overreliance that overshadows the fundamental need for deep conceptual forecasting knowledge.
The Pitfall of Overreliance:
Automation Complacency:
— Many modern software tools have automated model selection and evaluation features. While these features are helpful, overreliance on them can lead to complacency, where the forecaster may overlook the need to understand the underlying assumptions and mechanics of the models being used.
Black-Box Syndrome:
— Overreliance on software can lead to a “black-box” syndrome, where models are treated as opaque entities that churn out forecasts without understanding or interpretation. This can be dangerous as it may result in misleading conclusions or unnoticed errors.
Misinterpretation and Miscommunication:
— Without a deep understanding, there’s a higher risk of misinterpreting the output provided by the software. This misinterpretation can extend to stakeholder communication, leading to potentially costly misunderstandings.
Nurturing Deep Conceptual Forecasting Knowledge:
Education and Training:
— Investing time in education and training to understand the core principles of forecasting, the assumptions behind different models, and the statistical theories that underpin forecasting methodologies is crucial.
Hands-On Experimentation:
— Beyond theoretical knowledge, hands-on experimentation with building models from scratch, testing different forecasting methodologies, and manually evaluating model performance can provide invaluable practical insights.
Engagement with the Forecasting Community:
— Engaging with the forecasting community, attending workshops, and reading widely on contemporary forecasting challenges and solutions can help deepen understanding and keep one updated on best practices.
Balancing Software Proficiency with Conceptual Understanding:
Software as a Tool, Not a Crutch:
— Treat software as a tool to aid in implementing forecasting methodologies, not as a crutch that replaces the need for understanding.
Customisation and Troubleshooting:
— A deep understanding allows for customising software tools to fit specific forecasting needs and effective troubleshooting when issues arise.
Informed Decision-Making:
— With a solid conceptual grounding, forecasters can make informed decisions on model selection, parameter tuning, and evaluation, even using advanced software tools.
Software proficiency is undoubtedly a valuable asset in modern forecasting. However, it should not supplant the necessity for deep conceptual forecasting knowledge. The ability to think critically, understand the nuances of different forecasting models, and make informed decisions is paramount for producing reliable forecasts and advancing one’s expertise in forecasting. The pathway towards becoming a proficient and effective forecaster is avoiding the trap of overreliance on software and nurturing a solid conceptual understanding.
Conclusion: Avoid these ten common forecasting faux pas in your data science interview to demonstrate true competency and land the job!
References.
- What Truly Works in Time Series Forecasting — The Results from Nixtla’s Mega Study.
- Strictly Proper Scoring Rules, Prediction, and Estimation (the key paper on proper scoring rules).
- Facebook Prophet falls out of favour.
- Why are people bashing Facebook Prophet.
- Facebook Prophet, Covid and why I don’t trust the Prophet.
- Is Facebook’s “Prophet” the Time-Series Messiah, or Just a Very Naughty Boy?