Construct requests for TensorFlow services

This topic describes how to construct request data for TensorFlow services that are based on the general-purpose Processor.

Input data

EAS provides a built-in TensorFlow Processor. To ensure performance, its input and output use the ProtoBuf format.

Invocation example

EAS has deployed a public test service in the VPC environment of the China (Shanghai) region. The service name is mnist_saved_model_example and the access token is empty. You can access the service through the URL http://pai-eas-vpc.cn-shanghai.aliyuncs.com/api/predict/mnist_saved_model_example. The procedure is as follows:

  1. Obtain the model information.

    You can obtain the model information, including signature_name, name, type, and shape, by sending a GET request. Example:

    $ curl http://pai-eas-vpc.cn-shanghai.aliyuncs.com/api/predict/mnist_saved_model_example | python -mjson.tool
    {
        "inputs": [
            {
                "name": "images",
                "shape": [
                    -1,
                    784
                ],
                "type": "DT_FLOAT"
            }
        ],
        "outputs": [
            {
                "name": "scores",
                "shape": [
                    -1,
                    10
                ],
                "type": "DT_FLOAT"
            }
        ],
        "signature_name": "predict_images"
    }

    The model is a classification model for the MNIST dataset (download the MNIST dataset). Its input is of the DT_FLOAT type with shape [-1, 784]: the first dimension is the batch_size (1 if a single request contains only one image), and the second dimension is a 784-dimensional vector. Because the input was flattened to one dimension when this test model was trained, a single image must also be flattened into a one-dimensional vector of 28*28=784 values. When you construct the input, you must always flatten it into a one-dimensional vector, regardless of the shape. In this example, a single image is passed as a one-dimensional vector of 1*784 values. If the model had been trained with an input shape of [-1, 28, 28], the input for a single image would have to be constructed as a one-dimensional vector of 1*28*28 values. If the shape specified in the request does not match the shape of the model, the prediction request fails.
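
    As a minimal sketch of this flattening step (the array below is a placeholder for a decoded 28 × 28 grayscale image, so the values are illustrative), NumPy can be used as follows:

    import numpy as np

    # Placeholder for a decoded 28 x 28 grayscale image (e.g., the result of cv2.imdecode).
    img = np.zeros((28, 28), dtype='float32')

    # Flatten the matrix into a one-dimensional vector of length 784,
    # matching the second dimension of the model shape [-1, 784].
    flat = img.reshape(784)
    print(flat.shape)  # (784,)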

  2. Install ProtoBuf and call the service (the following uses Python 2 as an example to show how to call a TensorFlow service).

    EAS provides a pre-generated ProtoBuf package for Python, which you can install directly with the following command.

    $ pip install http://eas-data.oss-cn-shanghai.aliyuncs.com/sdk/pai_tf_predict_proto-1.0-py2.py3-none-any.whl

    The following Python 2 sample code calls the service to make a prediction.

    #!/usr/bin/env python
    # -*- coding: UTF-8 -*-
    import json
    from urlparse import urlparse
    from com.aliyun.api.gateway.sdk import client
    from com.aliyun.api.gateway.sdk.http import request
    from com.aliyun.api.gateway.sdk.common import constant
    from pai_tf_predict_proto import tf_predict_pb2
    import cv2
    import numpy as np

    with open('2.jpg', 'rb') as infile:
        buf = infile.read()
        # Convert the byte stream into a NumPy array.
        x = np.fromstring(buf, dtype='uint8')
        # Decode the array into a 28 x 28 image matrix.
        img = cv2.imdecode(x, cv2.IMREAD_UNCHANGED)
        # The prediction API expects a one-dimensional vector of length 784, so reshape the matrix to 784.
        img = np.reshape(img, 784)

    def predict(url, app_key, app_secret, request_data):
        cli = client.DefaultClient(app_key=app_key, app_secret=app_secret)
        body = request_data
        url_ele = urlparse(url)
        host = 'http://' + url_ele.hostname
        path = url_ele.path
        req_post = request.Request(host=host, protocol=constant.HTTP, url=path, method="POST", time_out=6000)
        req_post.set_body(body)
        req_post.set_content_type(constant.CONTENT_TYPE_STREAM)
        stat, header, content = cli.execute(req_post)
        return stat, dict(header) if header is not None else {}, content

    def demo():
        # Enter the model information. You can obtain it by clicking the model name.
        app_key = 'YOUR_APP_KEY'
        app_secret = 'YOUR_APP_SECRET'
        url = 'YOUR_APP_URL'
        # Construct the request.
        request = tf_predict_pb2.PredictRequest()
        request.signature_name = 'predict_images'
        request.inputs['images'].dtype = tf_predict_pb2.DT_FLOAT  # Data type of the images input.
        request.inputs['images'].array_shape.dim.extend([1, 784])  # Shape of the images input.
        request.inputs['images'].float_val.extend(img)  # Input data.
        request.inputs['keep_prob'].dtype = tf_predict_pb2.DT_FLOAT  # Data type of the keep_prob input.
        request.inputs['keep_prob'].float_val.extend([0.75])  # Provide a default value.
        # Serialize the ProtoBuf request into a string for transmission.
        request_data = request.SerializeToString()
        stat, header, content = predict(url, app_key, app_secret, request_data)
        if stat != 200:
            print 'Http status code: ', stat
            print 'Error msg in header: ', header['x-ca-error-message'] if 'x-ca-error-message' in header else ''
            print 'Error msg in body: ', content
        else:
            response = tf_predict_pb2.PredictResponse()
            response.ParseFromString(content)
            print(response)

    if __name__ == '__main__':
        demo()

    The output of this example is as follows.

    outputs {
      key: "scores"
      value {
        dtype: DT_FLOAT
        array_shape {
          dim: 1
          dim: 10
        }
        float_val: 0.0
        float_val: 0.0
        float_val: 1.0
        float_val: 0.0
        float_val: 0.0
        float_val: 0.0
        float_val: 0.0
        float_val: 0.0
        float_val: 0.0
        float_val: 0.0
      }
    }

    The outputs field contains the scores for the 10 classes. For the input image 2.jpg, all values are 0 except value[2]. Therefore, the final prediction is 2, which is correct.
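
    As a minimal follow-up sketch (reusing the response object parsed in the example above; the variable name predicted_label is illustrative), the predicted class is simply the index of the highest score:

    import numpy as np

    # 'response' is the PredictResponse parsed in the example above.
    scores = response.outputs['scores'].float_val  # 10 class scores
    predicted_label = int(np.argmax(scores))
    print(predicted_label)  # 2 for the sample image 2.jpg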

Calling the service from other languages

For languages other than Python, you must manually generate the prediction request code from the .proto file before you can call the service. The procedure is as follows:

  1. Create the request definition file (for example, a file named tf.proto) with the following content.

    syntax = "proto3";
    option cc_enable_arenas = true;
    option java_package = "com.aliyun.openservices.eas.predict.proto";
    option java_outer_classname = "PredictProtos";
    enum ArrayDataType {
      // Not a legal value for DataType. Used to indicate a DataType field
      // has not been set.
      DT_INVALID = 0;
      // Data types that all computation devices are expected to be
      // capable to support.
      DT_FLOAT = 1;
      DT_DOUBLE = 2;
      DT_INT32 = 3;
      DT_UINT8 = 4;
      DT_INT16 = 5;
      DT_INT8 = 6;
      DT_STRING = 7;
      DT_COMPLEX64 = 8;  // Single-precision complex.
      DT_INT64 = 9;
      DT_BOOL = 10;
      DT_QINT8 = 11;     // Quantized int8.
      DT_QUINT8 = 12;    // Quantized uint8.
      DT_QINT32 = 13;    // Quantized int32.
      DT_BFLOAT16 = 14;  // Float32 truncated to 16 bits.  Only for cast ops.
      DT_QINT16 = 15;    // Quantized int16.
      DT_QUINT16 = 16;   // Quantized uint16.
      DT_UINT16 = 17;
      DT_COMPLEX128 = 18;  // Double-precision complex.
      DT_HALF = 19;
      DT_RESOURCE = 20;
      DT_VARIANT = 21;  // Arbitrary C++ data types.
    }
    // Dimensions of an array.
    message ArrayShape {
      repeated int64 dim = 1 [packed = true];
    }
    // Protocol buffer representing an array.
    message ArrayProto {
      // Data Type.
      ArrayDataType dtype = 1;
      // Shape of the array.
      ArrayShape array_shape = 2;
      // DT_FLOAT.
      repeated float float_val = 3 [packed = true];
      // DT_DOUBLE.
      repeated double double_val = 4 [packed = true];
      // DT_INT32, DT_INT16, DT_INT8, DT_UINT8.
      repeated int32 int_val = 5 [packed = true];
      // DT_STRING.
      repeated bytes string_val = 6;
      // DT_INT64.
      repeated int64 int64_val = 7 [packed = true];
      // DT_BOOL.
      repeated bool bool_val = 8 [packed = true];
    }
    // PredictRequest specifies which TensorFlow model to run, as well as
    // how inputs are mapped to tensors and how outputs are filtered before
    // returning to user.
    message PredictRequest {
      // A named signature to evaluate. If unspecified, the default signature
      // will be used.
      string signature_name = 1;
      // Input tensors.
      // Names of input tensor are alias names. The mapping from aliases to real
      // input tensor names is expected to be stored as named generic signature
      // under the key "inputs" in the model export.
      // Each alias listed in a generic signature named "inputs" should be provided
      // exactly once in order to run the prediction.
      map<string, ArrayProto> inputs = 2;
      // Output filter.
      // Names specified are alias names. The mapping from aliases to real output
      // tensor names is expected to be stored as named generic signature under
      // the key "outputs" in the model export.
      // Only tensors specified here will be run/fetched and returned, with the
      // exception that when none is specified, all tensors specified in the
      // named signature will be run/fetched and returned.
      repeated string output_filter = 3;
    }
    // Response for PredictRequest on successful run.
    message PredictResponse {
      // Output tensors.
      map<string, ArrayProto> outputs = 1;
    }

    PredictRequest defines the input format of the TensorFlow service, and PredictResponse defines its output format. For detailed usage of ProtoBuf, see the ProtoBuf documentation.

  2. Install protoc.

    #!/bin/bash
    PROTOC_ZIP=protoc-3.3.0-linux-x86_64.zip
    curl -OL https://github.com/google/protobuf/releases/download/v3.3.0/$PROTOC_ZIP
    unzip -o $PROTOC_ZIP -d ./ bin/protoc
    rm -f $PROTOC_ZIP
  3. Generate the request code file:

    • Java

      $ bin/protoc --java_out=./ tf.proto

      After the command is executed, com/aliyun/openservices/eas/predict/proto/PredictProtos.java is generated in the current directory. Import the file into your project.

    • Python

      $ bin/protoc --python_out=./ tf.proto

      After the command is executed, tf_pb2.py is generated in the current directory. Import the file with an import statement (see the usage sketch after this list).

    • C++

      $ bin/protoc --cpp_out=./ tf.proto

      After the command is executed, tf.pb.cc and tf.pb.h are generated in the current directory. Add #include "tf.pb.h" to your code and add tf.pb.cc to the compile list.
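
    As a minimal usage sketch for the generated Python module (assuming the tf_pb2.py file produced above; the input values and the output_filter entry are illustrative, not required), a request can be built and the raw response body parsed as follows:

    import tf_pb2

    # Build a request with the generated classes.
    request = tf_pb2.PredictRequest()
    request.signature_name = 'predict_images'
    request.inputs['images'].dtype = tf_pb2.DT_FLOAT
    request.inputs['images'].array_shape.dim.extend([1, 784])
    request.inputs['images'].float_val.extend([0.0] * 784)  # illustrative input data
    # Optionally restrict which outputs are returned; omit to fetch all outputs of the signature.
    request.output_filter.append('scores')
    request_data = request.SerializeToString()  # use this as the HTTP request body

    def parse_response(content):
        # 'content' is the raw bytes of the HTTP response body.
        response = tf_pb2.PredictResponse()
        response.ParseFromString(content)
        return list(response.outputs['scores'].float_val)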
