rankeval.visualization package

Submodules

rankeval.visualization.effectiveness module

This module provides visualizations for several effectiveness analyses focused on assessing model performance in terms of accuracy.

rankeval.visualization.effectiveness.init_plot_style()[source]

Initialize the plot style for the RankEval visualization utilities.

rankeval.visualization.effectiveness.is_log_scale_matrix(matrix)[source]

This method receives as input a matrix created as performance.sel(dataset=X, model=Y), with li and lj as axes.

If the first value is at least twice as large as the second, it returns True and the matrix will be rescaled in plot_rank_confusion_matrix by applying log2; otherwise it returns False and the matrix is left unchanged.

matrix : xarray
A matrix created as performance.sel(dataset=X, model=Y), with li and lj as axes.
: bool
True if the matrix should be rescaled with log2, False otherwise.
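A minimal sketch of this kind of heuristic in plain NumPy (the function name and the exact comparison below are illustrative assumptions, not RankEval's internals):

```python
import numpy as np

def needs_log_scale(values):
    """Return True when the largest value is at least twice the
    second-largest, suggesting a log2 rescaling for readability."""
    top_two = np.sort(np.asarray(values).ravel())[::-1][:2]
    return bool(top_two[0] >= 2 * top_two[1])

print(needs_log_scale([1000, 120, 80, 40]))  # True: strongly skewed values
print(needs_log_scale([10, 9, 8, 7]))        # False: roughly uniform values
```

When the check passes, plotting log2 of the matrix keeps the small cells visible next to the dominant ones.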
rankeval.visualization.effectiveness.plot_document_graded_relevance(performance)[source]

This method plots the results obtained from the document_graded_relevance analysis.

performance: xarray
The xarray obtained after computing document_graded_relevance.
fig_list : list
The list of figures.
rankeval.visualization.effectiveness.plot_model_performance(performance, compare='models', show_values=False)[source]

This method plots the results obtained from the model_performance analysis.

performance: xarray
The xarray obtained after computing model_performance.
compare: string
Indicates which elements to compare against each other. Accepted values are ‘models’ or ‘metrics’.
show_values: bool
If True, each bar in the plot is annotated with the rounded value it represents. The default is False, which shows no values on the bars.
fig_list : list
The list of figures.
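A hedged sketch of the show_values behaviour using plain matplotlib (bar_plot below is an illustrative helper, not part of RankEval):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, suitable for scripts
import matplotlib.pyplot as plt

def bar_plot(labels, values, show_values=False):
    """Bar plot; when show_values is True, annotate each bar
    with its rounded height, mirroring the option described above."""
    fig, ax = plt.subplots()
    bars = ax.bar(labels, values)
    if show_values:
        for bar in bars:
            h = bar.get_height()
            ax.text(bar.get_x() + bar.get_width() / 2, h,
                    f"{h:.3f}", ha="center", va="bottom")
    return fig

fig = bar_plot(["model A", "model B"], [0.4123, 0.4871], show_values=True)
```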
rankeval.visualization.effectiveness.plot_query_class_performance(performance, show_values=False, compare='models')[source]

This method plots the results obtained from the query_class_performance analysis.

performance: xarray
The xarray obtained after computing query_class_performance.
compare: string
Indicates which elements to compare against each other. Accepted values are ‘models’ or ‘metrics’.
show_values: bool
If True, each bar in the plot is annotated with the rounded value it represents. The default is False, which shows no values on the bars.
fig_list : list
The list of figures.
rankeval.visualization.effectiveness.plot_query_wise_performance(performance, compare='models')[source]

This method plots the results obtained from the query_wise_performance analysis.

performance: xarray
The xarray obtained after computing query_wise_performance.
compare: string
Indicates which elements to compare against each other. Accepted values are ‘models’ or ‘metrics’.
fig_list : list
The list of figures.
rankeval.visualization.effectiveness.plot_rank_confusion_matrix(performance)[source]

This method plots the results obtained from the rank_confusion_matrix analysis.

performance: xarray
The xarray obtained after computing rank_confusion_matrix.
fig_list : list
The list of figures.
rankeval.visualization.effectiveness.plot_tree_wise_average_contribution(performance)[source]

This method plots the results obtained from the tree_wise_average_contribution analysis.

performance: xarray
The xarray obtained after computing tree_wise_average_contribution.
fig_list : list
The list of figures.
rankeval.visualization.effectiveness.plot_tree_wise_performance(performance, compare='models')[source]

This method plots the results obtained from the tree_wise_performance analysis.

performance: xarray
The xarray obtained after computing tree_wise_performance.
compare: string
Indicates which elements to compare against each other. The default is ‘models’. Accepted values are ‘models’, ‘metrics’, or ‘datasets’.
fig_list : list
The list of figures.
rankeval.visualization.effectiveness.resolvexticks(performance)[source]

This method uniformly subsamples the xticks when there are too many of them on the x axis. It is called by plot_tree_wise_performance when the number of trees (xticks) is too large to be displayed nicely.

performance : xarray
The xarray obtained after computing tree_wise_performance.
xticks : list
The list of indices for the xticks.
xticks_labels : list
The corresponding label for each xtick.
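A uniform subsampling of tick positions can be sketched as follows (the helper name, the cap of 10 ticks, and the 1-based labelling are illustrative assumptions):

```python
import math

def subsample_xticks(n_trees, max_ticks=10):
    """Pick at most max_ticks uniformly spaced tick positions over
    n_trees trees, always including the first tree (index 0)."""
    step = max(1, math.ceil(n_trees / max_ticks))
    xticks = list(range(0, n_trees, step))
    xticks_labels = [str(i + 1) for i in xticks]  # trees numbered from 1
    return xticks, xticks_labels

ticks, labels = subsample_xticks(1000)  # -> 10 ticks: 0, 100, ..., 900
```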

rankeval.visualization.feature module

This module provides support for feature analysis visualizations.

rankeval.visualization.feature.align_y_axis(ax1, ax2, minresax1, minresax2, num_ticks=7)[source]

Sets the tick marks of two twinx axes so that they line up, with num_ticks total tick marks.

ax1 and ax2 are matplotlib axes. The spacing between tick marks will be a factor of minresax1 and minresax2, respectively.
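A minimal sketch of such an alignment (this ignores the minresax rounding and simply gives both axes num_ticks evenly spaced ticks so their gridlines coincide; names are illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import numpy as np
import matplotlib.pyplot as plt

def align_twinx_ticks(ax1, ax2, num_ticks=7):
    """Give both twinned axes num_ticks evenly spaced ticks over
    their current limits, so horizontal gridlines line up."""
    ax1.set_yticks(np.linspace(*ax1.get_ylim(), num_ticks))
    ax2.set_yticks(np.linspace(*ax2.get_ylim(), num_ticks))

fig, ax1 = plt.subplots()
ax2 = ax1.twinx()                      # second y axis sharing the x axis
ax1.plot([0, 1, 2], [0.0, 0.5, 0.3])  # e.g. a metric in [0, 1]
ax2.plot([0, 1, 2], [10, 90, 40])     # e.g. a count on another scale
align_twinx_ticks(ax1, ax2)
```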

rankeval.visualization.feature.plot_feature_importance(feature_perf, max_features=10, sort_by='gain', feature_names=None)[source]

Shows the most important features as a bar plot.

feature_perf : xarray.DataArray
Feature importance statistics of the model to be visualized.
max_features : int or None
Maximum number of features to visualize. If None is passed, all the features are shown.
sort_by : ‘gain’ or ‘count’
The method used to select the top features to display. ‘gain’ selects the top features by importance, ‘count’ selects them by usage (i.e., the number of times a feature has been used by a split node).
feature_names : list of string
The feature names to use for plotting. If None, the feature indices are used in place of the names (starting from 1).
: matplotlib.figure.Figure
The matplotlib Figure.
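The selection logic implied by max_features and sort_by can be sketched as follows (top_features and its argument names are illustrative, not the library's internals):

```python
import numpy as np

def top_features(gain, count, max_features=10, sort_by="gain",
                 feature_names=None):
    """Return (name, score) pairs for the top features, ranked by
    either importance gain or usage count, best first."""
    scores = np.asarray(gain if sort_by == "gain" else count, dtype=float)
    if feature_names is None:
        feature_names = [str(i + 1) for i in range(len(scores))]  # 1-based
    order = np.argsort(scores)[::-1]  # descending by score
    if max_features is not None:
        order = order[:max_features]
    return [(feature_names[i], scores[i]) for i in order]

top = top_features(gain=[0.1, 0.7, 0.2], count=[30, 5, 12],
                   max_features=2, sort_by="gain")
# top == [("2", 0.7), ("3", 0.2)]
```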

rankeval.visualization.topological module

This module provides support for topological analysis visualizations.

rankeval.visualization.topological.plot_shape(topological, max_level=10)[source]

Shows the average tree shape as a bullseye plot.

topological : TopologicalAnalysisResult
Topological stats of the model to be visualized.
max_level : int
Maximum tree depth of the visualization. The maximum allowed value is 16.
: matplotlib.figure.Figure
The matplotlib Figure.
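A rough sketch of a bullseye-style plot using matplotlib's polar projection (the input format level_fill, a per-level fill fraction in [0, 1], is an assumed stand-in for the TopologicalAnalysisResult statistics):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import numpy as np
import matplotlib.pyplot as plt

def bullseye(level_fill, max_level=10):
    """One concentric ring per tree level; ring colour encodes how
    full that level is on average (1.0 = perfectly balanced level)."""
    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    for level, fill in enumerate(level_fill[:max_level]):
        # A full 2*pi bar at radius `level` draws one ring.
        ax.bar(x=0, height=1.0, width=2 * np.pi, bottom=level,
               color=plt.cm.viridis(fill))
    ax.set_xticks([])
    ax.set_yticks([])
    return fig

fig = bullseye([1.0, 1.0, 0.8, 0.5, 0.2])
```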