Note

You are viewing the documentation for an older version of boto (boto2).

Boto3, the next version of Boto, is now stable and recommended for general use. It can be used side-by-side with Boto in the same project, so it is easy to start using Boto3 in your existing projects as well as new projects. Going forward, API updates and all new feature work will be focused on Boto3.

For more information, see the documentation for boto3.

Machine Learning

boto.machinelearning

boto.machinelearning.connect_to_region(region_name, **kw_params)
boto.machinelearning.regions()

Get all available regions for Amazon Machine Learning.

Return type:list
Returns:A list of boto.regioninfo.RegionInfo
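
For example, a minimal sketch of listing the available regions and opening a connection with connect_to_region (credentials are assumed to come from the usual boto configuration):

    import boto.machinelearning

    # List the regions that expose Amazon Machine Learning.
    for region in boto.machinelearning.regions():
        print(region.name)

    # Open a connection to a specific region; credentials are read from the
    # standard boto configuration (environment variables, ~/.boto, etc.).
    conn = boto.machinelearning.connect_to_region('us-east-1')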

boto.machinelearning.layer1

class boto.machinelearning.layer1.MachineLearningConnection(**kwargs)

Definition of the public APIs exposed by Amazon Machine Learning

APIVersion = '2014-12-12'
AuthServiceName = 'machinelearning'
DefaultRegionEndpoint = 'machinelearning.us-east-1.amazonaws.com'
DefaultRegionName = 'us-east-1'
ResponseError

alias of boto.exception.JSONResponseError

ServiceName = 'MachineLearning'
TargetPrefix = 'AmazonML_20141212'
create_batch_prediction(batch_prediction_id, ml_model_id, batch_prediction_data_source_id, output_uri, batch_prediction_name=None)

Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a DataSource. This operation creates a new BatchPrediction, and uses an MLModel and the data files referenced by the DataSource as information sources.

CreateBatchPrediction is an asynchronous operation. In response to CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the BatchPrediction status to PENDING. After the BatchPrediction completes, Amazon ML sets the status to COMPLETED.

You can poll for status updates by using the GetBatchPrediction operation and checking the Status parameter of the result. After the COMPLETED status appears, the results are available in the location specified by the OutputUri parameter.

Parameters:
  • batch_prediction_id (string) – A user-supplied ID that uniquely identifies the BatchPrediction.
  • batch_prediction_name (string) – A user-supplied name or description of the BatchPrediction. BatchPredictionName can only use the UTF-8 character set.
  • ml_model_id (string) – The ID of the MLModel that will generate predictions for the group of observations.
  • batch_prediction_data_source_id (string) – The ID of the DataSource that points to the group of observations to predict.
  • output_uri (string) – The location of an Amazon Simple Storage Service (Amazon S3) bucket or directory to store the batch prediction results. The following substrings are not allowed in the s3 key portion of the “outputURI” field: ‘:’, ‘//’, ‘/./’, ‘/../’.
Amazon ML needs permissions to store and retrieve the logs on your
behalf. For information about how to set permissions, see the `Amazon Machine Learning Developer Guide`_.
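A minimal sketch of creating a batch prediction and polling it with get_batch_prediction. All IDs and the S3 location are hypothetical placeholders, and the parsed response is assumed to be a dict exposing the Status value described above:

    import time
    import boto.machinelearning

    conn = boto.machinelearning.connect_to_region('us-east-1')

    # IDs and the output location below are placeholders.
    conn.create_batch_prediction(
        batch_prediction_id='bp-example-id',
        ml_model_id='ml-example-model',
        batch_prediction_data_source_id='ds-example-unlabeled',
        output_uri='s3://example-bucket/batch-output/',
        batch_prediction_name='example batch prediction')

    # Poll until the asynchronous operation reaches a terminal status.
    while True:
        result = conn.get_batch_prediction('bp-example-id')
        if result.get('Status') in ('COMPLETED', 'FAILED'):
            break
        time.sleep(30)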
create_data_source_from_rds(data_source_id, rds_data, role_arn, data_source_name=None, compute_statistics=None)

Creates a DataSource object from an `Amazon Relational Database Service`_ (Amazon RDS) database. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRDS is an asynchronous operation. In response to CreateDataSourceFromRDS, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in COMPLETED or PENDING status can only be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

Parameters:
  • data_source_id (string) – A user-supplied ID that uniquely identifies the DataSource. Typically, an Amazon Resource Name (ARN) becomes the ID for a DataSource.
  • data_source_name (string) – A user-supplied name or description of the DataSource.
  • rds_data (dict) –

The data specification of an Amazon RDS DataSource:

  • DatabaseInformation -
    • DatabaseName - Name of the Amazon RDS database.
    • InstanceIdentifier - Unique identifier for the Amazon RDS
      database instance.
  • DatabaseCredentials - AWS Identity and Access Management (IAM)
    credentials that are used to connect to the Amazon RDS database.
  • ResourceRole - Role (DataPipelineDefaultResourceRole) assumed by an
    Amazon Elastic Compute Cloud (EC2) instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see `Role templates`_ for data pipelines.
  • ServiceRole - Role (DataPipelineDefaultRole) assumed by the AWS Data
    Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon Simple Storage Service (Amazon S3). For more information, see `Role templates`_ for data pipelines.
  • SecurityInfo - Security information to use to access an Amazon RDS
    instance. You need to set up appropriate ingress rules for the security entity IDs provided to allow access to the Amazon RDS instance. Specify a [SubnetId, SecurityGroupIds] pair for a VPC-based Amazon RDS instance.
  • SelectSqlQuery - Query that is used to retrieve the observation data
    for the DataSource.
  • S3StagingLocation - Amazon S3 location for staging RDS data. The data
    retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
  • DataSchemaUri - Amazon S3 location of the DataSchema.
  • DataSchema - A JSON string representing the schema. This is not
    required if DataSchemaUri is specified.
  • DataRearrangement - A JSON string representing the splitting
    requirement of a DataSource. Sample - "{"randomSeed":"some-random-seed", "splitting":{"percentBegin":10,"percentEnd":60}}"

Parameters:
  • role_arn (string) – The role that Amazon ML assumes on behalf of the user to create and activate a data pipeline in the user's account and copy data (using the SelectSqlQuery query) from Amazon RDS to Amazon S3.
  • compute_statistics (boolean) – The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to True if the DataSource needs to be used for MLModel training.
create_data_source_from_redshift(data_source_id, data_spec, role_arn, data_source_name=None, compute_statistics=None)

Creates a DataSource from `Amazon Redshift`_. A DataSource references data that can be used to perform either CreateMLModel, CreateEvaluation or CreateBatchPrediction operations.

CreateDataSourceFromRedshift is an asynchronous operation. In response to CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in COMPLETED or PENDING status can only be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observations should exist in the database hosted on an Amazon Redshift cluster and should be specified by a SelectSqlQuery. Amazon ML executes an `Unload`_ command in Amazon Redshift to transfer the result set of the SelectSqlQuery to S3StagingLocation.

After the DataSource is created, it’s ready for use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource requires another item – a recipe. A recipe describes the observation variables that participate in training an MLModel. A recipe describes how each input variable will be used in training. Will the variable be included or excluded from training? Will the variable be manipulated, for example, combined with another variable or split apart into word combinations? The recipe provides answers to these questions. For more information, see the Amazon Machine Learning Developer Guide.

Parameters:
  • data_source_id (string) – A user-supplied ID that uniquely identifies the DataSource.
  • data_source_name (string) – A user-supplied name or description of the DataSource.
  • data_spec (dict) –

The data specification of an Amazon Redshift DataSource:

  • DatabaseInformation -
    • DatabaseName - Name of the Amazon Redshift database.
    • ClusterIdentifier - Unique ID for the Amazon Redshift cluster.
  • DatabaseCredentials - AWS Identity and Access Management (IAM)
    credentials that are used to connect to the Amazon Redshift database.
  • SelectSqlQuery - Query that is used to retrieve the observation data
    for the DataSource.
  • S3StagingLocation - Amazon Simple Storage Service (Amazon S3)
    location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using SelectSqlQuery is stored in this location.
  • DataSchemaUri - Amazon S3 location of the DataSchema.
  • DataSchema - A JSON string representing the schema. This is not
    required if DataSchemaUri is specified.
  • DataRearrangement - A JSON string representing the splitting
    requirement of a DataSource. Sample - "{"randomSeed":"some-random-seed", "splitting":{"percentBegin":10,"percentEnd":60}}"

Parameters:role_arn (string) – A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:
  • A security group to allow Amazon ML to execute the SelectSqlQuery
    query on an Amazon Redshift cluster
  • An Amazon S3 bucket policy to grant Amazon ML read/write permissions
    on the S3StagingLocation
Parameters:compute_statistics (boolean) – The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to True if the DataSource needs to be used for MLModel training.
create_data_source_from_s3(data_source_id, data_spec, data_source_name=None, compute_statistics=None)

Creates a DataSource object. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromS3 is an asynchronous operation. In response to CreateDataSourceFromS3, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in COMPLETED or PENDING status can only be used to perform CreateMLModel, CreateEvaluation or CreateBatchPrediction operations.

If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observation data used in a DataSource should be ready to use; that is, it should have a consistent structure, and missing data values should be kept to a minimum. The observation data must reside in one or more CSV files in an Amazon Simple Storage Service (Amazon S3) bucket, along with a schema that describes the data items by name and type. The same schema must be used for all of the data files referenced by the DataSource.

After the DataSource has been created, it’s ready to use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource requires another item: a recipe. A recipe describes the observation variables that participate in training an MLModel. A recipe describes how each input variable will be used in training. Will the variable be included or excluded from training? Will the variable be manipulated, for example, combined with another variable, or split apart into word combinations? The recipe provides answers to these questions. For more information, see the `Amazon Machine Learning Developer Guide`_.

Parameters:
  • data_source_id (string) – A user-supplied identifier that uniquely identifies the DataSource.
  • data_source_name (string) – A user-supplied name or description of the DataSource.
  • data_spec (dict) –

The data specification of a DataSource:

  • DataLocationS3 - Amazon Simple Storage Service (Amazon S3) location
    of the observation data.
  • DataSchemaLocationS3 - Amazon S3 location of the DataSchema.
  • DataSchema - A JSON string representing the schema. This is not
    required if DataSchemaUri is specified.
  • DataRearrangement - A JSON string representing the splitting
    requirement of a DataSource. Sample - "{"randomSeed":"some-random-seed", "splitting":{"percentBegin":10,"percentEnd":60}}"
Parameters:compute_statistics (boolean) – The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to True if the DataSource needs to be used for MLModel training.
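A minimal sketch of building the data_spec dict from the keys listed above and creating an S3-backed DataSource; the bucket, file names, and ID are hypothetical placeholders:

    import boto.machinelearning

    conn = boto.machinelearning.connect_to_region('us-east-1')

    # The bucket, key names, and ID are placeholders.
    data_spec = {
        'DataLocationS3': 's3://example-bucket/training/observations.csv',
        'DataSchemaLocationS3': 's3://example-bucket/training/observations.csv.schema',
        # Use only the first 70% of the rows; the remainder could back a
        # separate evaluation DataSource.
        'DataRearrangement': '{"splitting":{"percentBegin":0,"percentEnd":70}}',
    }

    conn.create_data_source_from_s3(
        data_source_id='ds-example-training',
        data_spec=data_spec,
        data_source_name='example training data',
        compute_statistics=True)  # required if this DataSource will train an MLModel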
create_evaluation(evaluation_id, ml_model_id, evaluation_data_source_id, evaluation_name=None)

Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set of observations associated with a DataSource. Like a DataSource for an MLModel, the DataSource for an Evaluation contains values for the Target Variable. The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effectively the MLModel functions on the test data. The Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or MulticlassAvgFScore, based on the corresponding MLModelType: BINARY, REGRESSION, or MULTICLASS.

CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetEvaluation operation to check progress of the evaluation during the creation operation.

Parameters:
  • evaluation_id (string) – A user-supplied ID that uniquely identifies the Evaluation.
  • evaluation_name (string) – A user-supplied name or description of the Evaluation.
  • ml_model_id (string) – The ID of the MLModel to evaluate.
The schema used in creating the MLModel must match the schema of the
DataSource used in the Evaluation.
Parameters:evaluation_data_source_id (string) – The ID of the DataSource for the evaluation. The schema of the DataSource must match the schema used to create the MLModel.
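For illustration, evaluating an existing MLModel against a held-out DataSource might look like the following sketch. All IDs are placeholders, and the Status key read from the get_evaluation response is an assumption about the parsed response shape:

    import boto.machinelearning

    conn = boto.machinelearning.connect_to_region('us-east-1')

    # IDs are placeholders; the DataSource must contain the target variable.
    conn.create_evaluation(
        evaluation_id='ev-example-id',
        ml_model_id='ml-example-model',
        evaluation_data_source_id='ds-example-heldout',
        evaluation_name='example evaluation')

    # The operation is asynchronous; check progress with get_evaluation.
    evaluation = conn.get_evaluation('ev-example-id')
    print(evaluation.get('Status'))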
create_ml_model(ml_model_id, ml_model_type, training_data_source_id, ml_model_name=None, parameters=None, recipe=None, recipe_uri=None)

Creates a new MLModel using the data files and the recipe as information sources.

An MLModel is nearly immutable. Users can only update the MLModelName and the ScoreThreshold in an MLModel without creating a new MLModel.

CreateMLModel is an asynchronous operation. In response to CreateMLModel, Amazon Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING. After the MLModel is created and ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetMLModel operation to check progress of the MLModel during the creation operation.

CreateMLModel requires a DataSource with computed statistics, which can be created by setting ComputeStatistics to True in CreateDataSourceFromRDS, CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.

Parameters:
  • ml_model_id (string) – A user-supplied ID that uniquely identifies the MLModel.
  • ml_model_name (string) – A user-supplied name or description of the MLModel.
  • ml_model_type (string) – The category of supervised learning that this MLModel will address. Choose from the following types:
  • Choose REGRESSION if the MLModel will be used to predict a
    numeric value.
  • Choose BINARY if the MLModel result has two possible values.
  • Choose MULTICLASS if the MLModel result has a limited number of
    values.
For more information, see the `Amazon Machine Learning Developer
Guide`_.
Parameters:parameters (map) –
A list of the training parameters in the MLModel. The list is
implemented as a map of key/value pairs.

The following is the current set of training parameters:

  • sgd.l1RegularizationAmount - Coefficient regularization L1 norm. It
    controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is not to use L1 normalization. This parameter cannot be used when L2 is specified. Use this parameter sparingly.
  • sgd.l2RegularizationAmount - Coefficient regularization L2 norm. It
    controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is not to use L2 normalization. This parameter cannot be used when L1 is specified. Use this parameter sparingly.
  • sgd.maxPasses - Number of times that the training process traverses
    the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.
  • sgd.maxMLModelSizeInBytes - Maximum allowed size of the model.
    Depending on the input data, the size of the model might affect its performance. The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.
Parameters:
  • training_data_source_id (string) – The DataSource that points to the training data.
  • recipe (string) – The data recipe for creating the MLModel. You must specify either the recipe or its URI. If you don't specify a recipe or its URI, Amazon ML creates a default.
  • recipe_uri (string) – The Amazon Simple Storage Service (Amazon S3) location and file name that contains the MLModel recipe. You must specify either the recipe or its URI. If you don't specify a recipe or its URI, Amazon ML creates a default.
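A minimal sketch of training a binary model with the documented sgd.* training parameters. The IDs are placeholders, and passing the parameter values as strings is an assumption about the wire format of the parameters map:

    import boto.machinelearning

    conn = boto.machinelearning.connect_to_region('us-east-1')

    # IDs are placeholders; the training DataSource must have been created
    # with compute_statistics=True.
    conn.create_ml_model(
        ml_model_id='ml-example-model',
        ml_model_type='BINARY',
        training_data_source_id='ds-example-training',
        ml_model_name='example binary model',
        parameters={
            'sgd.maxPasses': '30',
            'sgd.l2RegularizationAmount': '1.0E-08',
        })
    # No recipe or recipe_uri is given, so Amazon ML creates a default recipe.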
create_realtime_endpoint(ml_model_id)

Creates a real-time endpoint for the MLModel. The endpoint contains the URI of the MLModel; that is, the location to send real-time prediction requests for the specified MLModel.

Parameters:ml_model_id (string) – The ID assigned to the MLModel during creation.
delete_batch_prediction(batch_prediction_id)

Assigns the DELETED status to a BatchPrediction, rendering it unusable.

After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation to verify that the status of the BatchPrediction changed to DELETED.

The result of the DeleteBatchPrediction operation is irreversible.

Parameters:batch_prediction_id (string) – A user-supplied ID that uniquely identifies the BatchPrediction.
delete_data_source(data_source_id)

Assigns the DELETED status to a DataSource, rendering it unusable.

After using the DeleteDataSource operation, you can use the GetDataSource operation to verify that the status of the DataSource changed to DELETED.

The results of the DeleteDataSource operation are irreversible.

Parameters:data_source_id (string) – A user-supplied ID that uniquely identifies the DataSource.
delete_evaluation(evaluation_id)

Assigns the DELETED status to an Evaluation, rendering it unusable.

After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation to verify that the status of the Evaluation changed to DELETED.

The results of the DeleteEvaluation operation are irreversible.

Parameters:evaluation_id (string) – A user-supplied ID that uniquely identifies the Evaluation to delete.
delete_ml_model(ml_model_id)

Assigns the DELETED status to an MLModel, rendering it unusable.

After using the DeleteMLModel operation, you can use the GetMLModel operation to verify that the status of the MLModel changed to DELETED.

The result of the DeleteMLModel operation is irreversible.

Parameters:ml_model_id (string) – A user-supplied ID that uniquely identifies the MLModel.
delete_realtime_endpoint(ml_model_id)

Deletes a real-time endpoint of an MLModel.

Parameters:ml_model_id (string) – The ID assigned to the MLModel during creation.
describe_batch_predictions(filter_variable=None, eq=None, gt=None, lt=None, ge=None, le=None, ne=None, prefix=None, sort_order=None, next_token=None, limit=None)

Returns a list of BatchPrediction operations that match the search criteria in the request.

Parameters:filter_variable (string) –
Use one of the following variables to filter a list of
BatchPrediction:
  • CreatedAt - Sets the search criteria to the BatchPrediction
    creation date.
  • Status - Sets the search criteria to the BatchPrediction status.
  • Name - Sets the search criteria to the contents of the
    BatchPrediction Name.
  • IAMUser - Sets the search criteria to the user account that invoked
    the BatchPrediction creation.
  • MLModelId - Sets the search criteria to the MLModel used in the
    BatchPrediction.
  • DataSourceId - Sets the search criteria to the DataSource used in
    the BatchPrediction.
  • DataURI - Sets the search criteria to the data file(s) used in the
    BatchPrediction. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
Parameters:
  • eq (string) – The equal to operator. The BatchPrediction results will have FilterVariable values that exactly match the value specified with EQ.
  • gt (string) – The greater than operator. The BatchPrediction results will have FilterVariable values that are greater than the value specified with GT.
  • lt (string) – The less than operator. The BatchPrediction results will have FilterVariable values that are less than the value specified with LT.
  • ge (string) – The greater than or equal to operator. The BatchPrediction results will have FilterVariable values that are greater than or equal to the value specified with GE.
  • le (string) – The less than or equal to operator. The BatchPrediction results will have FilterVariable values that are less than or equal to the value specified with LE.
  • ne (string) – The not equal to operator. The BatchPrediction results will have FilterVariable values not equal to the value specified with NE.
  • prefix (string) –
A string that is found at the beginning of a variable, such as Name
or Id.
For example, a Batch Prediction operation could have the Name
2014-09-09-HolidayGiftMailer. To search for this BatchPrediction, select Name for the FilterVariable and any of the following strings for the Prefix:
  • 2014-09
  • 2014-09-09
  • 2014-09-09-Holiday
Parameters:sort_order (string) – A two-value parameter that determines the sequence of the resulting list of BatchPrediction.
  • asc - Arranges the list in ascending order (A-Z, 0-9).
  • dsc - Arranges the list in descending order (Z-A, 9-0).

Results are sorted by FilterVariable.

Parameters:
  • next_token (string) – An ID of the page in the paginated results.
  • limit (integer) – The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.
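As an illustration, the filter parameters above can be combined to list batch predictions by name prefix. IDs and names are placeholders, and the Results key read from the parsed response is an assumption about the response shape:

    import boto.machinelearning

    conn = boto.machinelearning.connect_to_region('us-east-1')

    # List batch predictions whose Name starts with the given prefix,
    # sorted in descending order of Name.
    response = conn.describe_batch_predictions(
        filter_variable='Name',
        prefix='2014-09-09',
        sort_order='dsc',
        limit=25)
    for bp in response.get('Results', []):
        print(bp.get('Name'), bp.get('Status'))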
describe_data_sources(filter_variable=None, eq=None, gt=None, lt=None, ge=None, le=None, ne=None, prefix=None, sort_order=None, next_token=None, limit=None)

Returns a list of DataSource that match the search criteria in the request.

Parameters:filter_variable (string) –

Use one of the following variables to filter a list of DataSource:

  • CreatedAt - Sets the search criteria to DataSource creation
    dates.
  • Status - Sets the search criteria to DataSource statuses.
  • Name - Sets the search criteria to the contents of the
    DataSource Name.
  • DataUri - Sets the search criteria to the URI of data files used to
    create the DataSource. The URI can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
  • IAMUser - Sets the search criteria to the user account that invoked
    the DataSource creation.
Parameters:
  • eq (string) – The equal to operator. The DataSource results will have FilterVariable values that exactly match the value specified with EQ.
  • gt (string) – The greater than operator. The DataSource results will have FilterVariable values that are greater than the value specified with GT.
  • lt (string) – The less than operator. The DataSource results will have FilterVariable values that are less than the value specified with LT.
  • ge (string) – The greater than or equal to operator. The DataSource results will have FilterVariable values that are greater than or equal to the value specified with GE.
  • le (string) – The less than or equal to operator. The DataSource results will have FilterVariable values that are less than or equal to the value specified with LE.
  • ne (string) – The not equal to operator. The DataSource results will have FilterVariable values not equal to the value specified with NE.
  • prefix (string) –
A string that is found at the beginning of a variable, such as Name
or Id.
For example, a DataSource could have the Name
2014-09-09-HolidayGiftMailer. To search for this DataSource, select Name for the FilterVariable and any of the following strings for the Prefix:
  • 2014-09
  • 2014-09-09
  • 2014-09-09-Holiday
Parameters:sort_order (string) – A two-value parameter that determines the sequence of the resulting list of DataSource.
  • asc - Arranges the list in ascending order (A-Z, 0-9).
  • dsc - Arranges the list in descending order (Z-A, 9-0).

Results are sorted by FilterVariable.

Parameters:
  • next_token (string) – The ID of the page in the paginated results.
  • limit (integer) – The maximum number of DataSource to include in the result.
describe_evaluations(filter_variable=None, eq=None, gt=None, lt=None, ge=None, le=None, ne=None, prefix=None, sort_order=None, next_token=None, limit=None)

Returns a list of Evaluation that match the search criteria in the request.

Parameters:filter_variable (string) –
Use one of the following variables to filter a list of Evaluation
objects:
  • CreatedAt - Sets the search criteria to the Evaluation creation
    date.
  • Status - Sets the search criteria to the Evaluation status.
  • Name - Sets the search criteria to the contents of the
    Evaluation Name.
  • IAMUser - Sets the search criteria to the user account that invoked
    an Evaluation.
  • MLModelId - Sets the search criteria to the MLModel that was
    evaluated.
  • DataSourceId - Sets the search criteria to the DataSource used in
    Evaluation.
  • DataUri - Sets the search criteria to the data file(s) used in the
    Evaluation. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
Parameters:
  • eq (string) – The equal to operator. The Evaluation results will have FilterVariable values that exactly match the value specified with EQ.
  • gt (string) – The greater than operator. The Evaluation results will have FilterVariable values that are greater than the value specified with GT.
  • lt (string) – The less than operator. The Evaluation results will have FilterVariable values that are less than the value specified with LT.
  • ge (string) – The greater than or equal to operator. The Evaluation results will have FilterVariable values that are greater than or equal to the value specified with GE.
  • le (string) – The less than or equal to operator. The Evaluation results will have FilterVariable values that are less than or equal to the value specified with LE.
  • ne (string) – The not equal to operator. The Evaluation results will have FilterVariable values not equal to the value specified with NE.
  • prefix (string) –
A string that is found at the beginning of a variable, such as Name
or Id.
For example, an Evaluation could have the Name
2014-09-09-HolidayGiftMailer. To search for this Evaluation, select Name for the FilterVariable and any of the following strings for the Prefix:
  • 2014-09
  • 2014-09-09
  • 2014-09-09-Holiday
Parameters:sort_order (string) – A two-value parameter that determines the sequence of the resulting list of Evaluation.
  • asc - Arranges the list in ascending order (A-Z, 0-9).
  • dsc - Arranges the list in descending order (Z-A, 9-0).

Results are sorted by FilterVariable.

Parameters:
  • next_token (string) – The ID of the page in the paginated results.
  • limit (integer) – The maximum number of Evaluation to include in the result.
describe_ml_models(filter_variable=None, eq=None, gt=None, lt=None, ge=None, le=None, ne=None, prefix=None, sort_order=None, next_token=None, limit=None)

Returns a list of MLModel that match the search criteria in the request.

Parameters:filter_variable (string) –

Use one of the following variables to filter a list of MLModel:

  • CreatedAt - Sets the search criteria to MLModel creation date.
  • Status - Sets the search criteria to MLModel status.
  • Name - Sets the search criteria to the contents of the
    MLModel Name.
  • IAMUser - Sets the search criteria to the user account that invoked
    the MLModel creation.
  • TrainingDataSourceId - Sets the search criteria to the DataSource
    used to train one or more MLModel.
  • RealtimeEndpointStatus - Sets the search criteria to the MLModel
    real-time endpoint status.
  • MLModelType - Sets the search criteria to MLModel type: binary,
    regression, or multi-class.
  • Algorithm - Sets the search criteria to the algorithm that the
    MLModel uses.
  • TrainingDataURI - Sets the search criteria to the data file(s) used
    in training an MLModel. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
Parameters:
  • eq (string) – The equal to operator. The MLModel results will have FilterVariable values that exactly match the value specified with EQ.
  • gt (string) – The greater than operator. The MLModel results will have FilterVariable values that are greater than the value specified with GT.
  • lt (string) – The less than operator. The MLModel results will have FilterVariable values that are less than the value specified with LT.
  • ge (string) – The greater than or equal to operator. The MLModel results will have FilterVariable values that are greater than or equal to the value specified with GE.
  • le (string) – The less than or equal to operator. The MLModel results will have FilterVariable values that are less than or equal to the value specified with LE.
  • ne (string) – The not equal to operator. The MLModel results will have FilterVariable values not equal to the value specified with NE.
  • prefix (string) –
A string that is found at the beginning of a variable, such as Name
or Id.
For example, an MLModel could have the Name
2014-09-09-HolidayGiftMailer. To search for this MLModel, select Name for the FilterVariable and any of the following strings for the Prefix:
  • 2014-09
  • 2014-09-09
  • 2014-09-09-Holiday
Parameters:sort_order (string) – A two-value parameter that determines the sequence of the resulting list of MLModel.
  • asc - Arranges the list in ascending order (A-Z, 0-9).
  • dsc - Arranges the list in descending order (Z-A, 9-0).

Results are sorted by FilterVariable.

Parameters:
  • next_token (string) – The ID of the page in the paginated results.
  • limit (integer) – The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.
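A sketch of paging through all models of one type using next_token. The filter value 'BINARY' and the Results/NextToken keys read from the parsed response are assumptions:

    import boto.machinelearning

    conn = boto.machinelearning.connect_to_region('us-east-1')

    # Collect every MLModel of type BINARY, following pagination tokens.
    models = []
    next_token = None
    while True:
        page = conn.describe_ml_models(
            filter_variable='MLModelType',
            eq='BINARY',
            limit=100,
            next_token=next_token)
        models.extend(page.get('Results', []))
        next_token = page.get('NextToken')
        if not next_token:
            break
    print('found %d binary models' % len(models))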
get_batch_prediction(batch_prediction_id)

Returns a BatchPrediction that includes detailed metadata, status, and data file information for a Batch Prediction request.

Parameters:batch_prediction_id (string) – An ID assigned to the BatchPrediction at creation.
get_data_source(data_source_id, verbose=None)

Returns a DataSource that includes metadata and data file information, as well as the current status of the DataSource.

GetDataSource provides results in normal or verbose format. The verbose format adds the schema description and the list of files pointed to by the DataSource to the normal format.

Parameters:
  • data_source_id (string) – The ID assigned to the DataSource at creation.
  • verbose (boolean) – Specifies whether the GetDataSource operation should return DataSourceSchema.

If true, DataSourceSchema is returned.

If false, DataSourceSchema is not returned.
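For example, a verbose lookup might look like the following sketch; the ID is a placeholder and the Status and DataSourceSchema keys read from the parsed response are assumptions:

    import boto.machinelearning

    conn = boto.machinelearning.connect_to_region('us-east-1')

    # Fetch a DataSource including its schema description.
    ds = conn.get_data_source('ds-example-training', verbose=True)
    print(ds.get('Status'))
    print(ds.get('DataSourceSchema'))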

get_evaluation(evaluation_id)

Returns an Evaluation that includes metadata as well as the current status of the Evaluation.

Parameters:evaluation_id (string) – The ID of the Evaluation to retrieve. The evaluation of each MLModel is recorded and cataloged. The ID provides the means to access the information.
get_ml_model(ml_model_id, verbose=None)

Returns an MLModel that includes detailed metadata and data source information, as well as the current status of the MLModel.

GetMLModel provides results in normal or verbose format.

Parameters:
  • ml_model_id (string) – The ID assigned to the MLModel at creation.
  • verbose (boolean) – Specifies whether the GetMLModel operation should return Recipe.

If true, Recipe is returned.

If false, Recipe is not returned.

make_request(action, body, host=None)

Makes a request to the server, with stock multiple-retry logic.

predict(ml_model_id, record, predict_endpoint)

Generates a prediction for the observation using the specified MLModel.

Not all response parameters will be populated; which ones are populated depends on the type of the requested model.

Parameters:
  • ml_model_id (string) – A unique identifier of the MLModel.
  • record (map) – A map of variable name-value pairs that represent an observation.
  • predict_endpoint (string) – The endpoint to send the predict request to.
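A sketch of real-time prediction: create an endpoint for the model, look up its URI, then call predict. The model ID and record values are placeholders, and the EndpointInfo/EndpointUrl keys used to locate the endpoint are assumptions about the get_ml_model response shape:

    import boto.machinelearning

    conn = boto.machinelearning.connect_to_region('us-east-1')

    # Create the real-time endpoint (it may take a few minutes to become usable).
    conn.create_realtime_endpoint('ml-example-model')
    model = conn.get_ml_model('ml-example-model')
    endpoint = model['EndpointInfo']['EndpointUrl']

    # The record maps variable names from the schema to string values.
    record = {'numberOfRooms': '3', 'city': 'Seattle'}
    prediction = conn.predict(
        ml_model_id='ml-example-model',
        record=record,
        predict_endpoint=endpoint)
    print(prediction)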
update_batch_prediction(batch_prediction_id, batch_prediction_name)

Updates the BatchPredictionName of a BatchPrediction.

You can use the GetBatchPrediction operation to view the contents of the updated data element.

Parameters:
  • batch_prediction_id (string) – The ID assigned to the BatchPrediction during creation.
  • batch_prediction_name (string) – A new user-supplied name or description of the BatchPrediction.
update_data_source(data_source_id, data_source_name)

Updates the DataSourceName of a DataSource.

You can use the GetDataSource operation to view the contents of the updated data element.

Parameters:
  • data_source_id (string) – The ID assigned to the DataSource during creation.
  • data_source_name (string) – A new user-supplied name or description of the DataSource that will replace the current description.
update_evaluation(evaluation_id, evaluation_name)

Updates the EvaluationName of an Evaluation.

You can use the GetEvaluation operation to view the contents of the updated data element.

Parameters:
  • evaluation_id (string) – The ID assigned to the Evaluation during creation.
  • evaluation_name (string) – A new user-supplied name or description of the Evaluation that will replace the current content.
update_ml_model(ml_model_id, ml_model_name=None, score_threshold=None)

Updates the MLModelName and the ScoreThreshold of an MLModel.

You can use the GetMLModel operation to view the contents of the updated data element.

Parameters:
  • ml_model_id (string) – The ID assigned to the MLModel during creation.
  • ml_model_name (string) – A user-supplied name or description of the MLModel.
  • score_threshold (float) – The ScoreThreshold used in binary classification MLModel that marks the boundary between a positive prediction and a negative prediction.
Output values greater than or equal to the ScoreThreshold receive a
positive result from the MLModel, such as True. Output values less than the ScoreThreshold receive a negative response from the MLModel, such as False.
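For example, renaming a binary model and raising its classification cutoff might look like this sketch (the ID and name are placeholders):

    import boto.machinelearning

    conn = boto.machinelearning.connect_to_region('us-east-1')

    # Only scores greater than or equal to 0.75 will now be reported as positive.
    conn.update_ml_model(
        ml_model_id='ml-example-model',
        ml_model_name='example binary model v2',
        score_threshold=0.75)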

boto.machinelearning.exceptions

exception boto.machinelearning.exceptions.IdempotentParameterMismatchException(status, reason, body=None, *args)
exception boto.machinelearning.exceptions.InternalServerException(status, reason, body=None, *args)
exception boto.machinelearning.exceptions.InvalidInputException(status, reason, body=None, *args)
exception boto.machinelearning.exceptions.LimitExceededException(status, reason, body=None, *args)
exception boto.machinelearning.exceptions.PredictorNotMountedException(status, reason, body=None, *args)
exception boto.machinelearning.exceptions.ResourceInUseException(status, reason, body=None, *args)
exception boto.machinelearning.exceptions.ResourceNotFoundException(status, reason, body=None, *args)