Simple HTTP with Retrofit 2

Retrofit has been simplifying HTTP calls for years, and this v2.0 is no different. In addition to fixing some long-standing annoyances, there are a handful of new features which make it more powerful than ever. In this talk from Droidcon NYC, Jake Wharton covers those new features as well as the integration of OkHttp and Okio APIs to ensure a full understanding of the HTTP stack.


Introduction (0:00)

My name is Jake Wharton and I work for Square. A naive man once said, “Retrofit 2 will be out by the end of this year.” That man, of course, was me at Droidcon New York last year. However, Retrofit 2 will be out by the end of this year, and I’m committing to that!

Retrofit was put in the open about five years ago, making it one of Square’s oldest open source projects. It actually started as a grab bag of different tools that we were using in our open source app: it had a shake detector inside it, then it was an HTTP client, then it had what is now the tape library. Most of this was created by Bob Lee, but I took stewardship of the library about three years ago. We finally got to a 1.0 after three years of being open source, part of which included our seven days of open source leading up to Google IO two years ago. Since then, we’ve had 18 releases in two years.

Retrofit 1: The Good (2:23)

There are already lots of great things inside Retrofit. It uses interfaces and method and parameter annotations to declaratively define how requests are created. Here is an example of how it can talk to the GitHub API:

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  List<Contributor> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

The HTTP client that backed Retrofit and the serialization mechanism (JSON, XML, protocol buffers) were pluggable, so you could pick and choose your own. At the time that Retrofit came out, it only worked with Apache's HTTP client. Before 1.0 was released, we added support for URL connection and, of course, OkHttp. The nice thing was that if you had some other HTTP client you wanted to plug in, you could back Retrofit with it instead of these three. This worked really well: after a couple of years, we were able to add App Engine support, which uses a custom client.

builder.setClient(new UrlConnectionClient());
builder.setClient(new ApacheClient());
builder.setClient(new OkClient());

builder.setClient(new CustomClient());

Serialization was also pluggable. By default it used Gson, but you could replace that with Jackson if you were using JSON.

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  List<Contributor> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

builder.setConverter(new GsonConverter());
builder.setConverter(new JacksonConverter());

If you were doing something like protocol buffers, we had both Wire and Google's protobuf converters, and of course you could use XML (if you don't like yourself).

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  ContributorResponse repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

builder.setConverter(new ProtoConverter());
builder.setConverter(new WireConverter());

builder.setConverter(new SimpleXMLConverter());

builder.setConverter(new CustomConverter());

Like the client, serialization was pluggable, so if you wanted to bring your own serialization library or just wanted to do something custom, you were able to do that.

In making the requests, you could do this multiple ways: we had synchronous…

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  List<Contributor> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
} 

List<Contributor> contributors =
    gitHubService.repoContributors("square", "retrofit");

…asynchronous, where you would specify a callback as the last parameter…

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  void repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo,
      Callback<List<Contributor>> cb);
} 

service.repoContributors("square", "retrofit", new Callback<List<Contributor>>() {
  @Override void success(List<Contributor> contributors, Response response) {
    // ...
  }

  @Override void failure(RetrofitError error) {
    // ...
  }
});

…and finally we added RxJava support post 1.0, which became quite a popular choice.

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Observable<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
} 

gitHubService.repoContributors("square", "retrofit")
    .subscribe(new Action1<List<Contributor>>() {
      @Override public void call(List<Contributor> contributors) {
        // ...
      }
    });

Retrofit 1: The Not-So-Good (4:58)

Sadly, no library is without its faults, and Retrofit is no exception. The number of classes we had to embed in the library's public API in order to support the pluggable client became something of a pain, partly because it made the library very fragile, and also because we couldn't change public APIs. Instead, we had our own request and response types which had URLs, headers, response codes, messages, etc. Then, to represent the bodies of requests and responses, we had TypedInput and TypedOutput, which basically tied a content type and a length to the body itself so we could read and write it. There were also a bunch of implementations of TypedInput and TypedOutput in the public API that we had to support and could not change.


If you wanted to access data from the response, such as a header or the URL, but you also wanted access to the deserialized body, this was not possible.

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  List<Contributor> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);

  @GET("/repos/{owner}/{repo}/contributors")
  Response repoContributors2(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

In this GitHub example, we're returning a list of contributors, which will be deserialized with whatever your converter is. However, say you need access to a header from this response. You weren't able to get that unless you specified an endpoint that returned this Response object. And since the Response object did not have a deserialized body inside of it, you couldn't get the list of contributors without doing the deserialization yourself in the consuming code.

I talked about the synchronous, asynchronous, and RxJava mechanisms of execution being a good thing, and they were. However, their implementation led to some rigidity. If different parts of your code wanted to call the same endpoint, some synchronously and others asynchronously, you had to have two method definitions for it:

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  List<Contributor> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);

  @GET("/repos/{owner}/{repo}/contributors")
  void repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo,
      Callback<List<Contributor>> cb);
}

There was a similar issue with RxJava. While RxJava thankfully allows you to do both synchronous and asynchronous with one definition, we still had to bake support for RxJava into the core of the library in order to allow you to return these observable objects.

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Observable<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

We also had to know how to create the observables inside Retrofit. But what if you wanted something else? We were certainly not going to embed support for Guava's ListenableFuture, for instance, or CompletableFuture for those using Java 8. Retrofit 1 was built for Java 6 and Android, so we couldn't reference these classes (not that we would want to bake that support into the core, anyway).

The way the converters worked was actually slightly inefficient. This was the API for creating your own converter, and it's extremely simple:

interface Converter {
  Object fromBody(TypedInput body, Type type);
  TypedOutput toBody(Object object);
}

You take an object and turn it into an HTTP representation, and turn an HTTP response back into an object. The problem is that when this is called, we say, "Here's the response and here's the type that we want you to convert it to," and the converter has to figure out how to deserialize that, which is a heavyweight process inside these libraries. This step, where the library creates its internal representation of how to do the serialization, is really slow. Even though some of these libraries cache those objects, having to look them up every single time you want to do a deserialization was inefficient.
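Conceptually, the fix is to pay that lookup cost once per type and reuse the result. Here is a toy sketch of the idea; all names are illustrative and not Retrofit's internals:

```java
import java.lang.reflect.Type;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model: resolve the (expensive) serialization adapter once per type and
// reuse it, instead of re-resolving it on every request.
class AdapterCache {
  private final Map<Type, Object> adapters = new ConcurrentHashMap<>();
  private final AtomicInteger resolutions = new AtomicInteger();

  Object adapterFor(Type type) {
    // computeIfAbsent pays the resolution cost only on the first lookup.
    return adapters.computeIfAbsent(type, t -> {
      resolutions.incrementAndGet(); // stands in for the slow reflective setup
      return new Object();
    });
  }

  int resolutions() {
    return resolutions.get();
  }
}
```

Repeated lookups for the same type return the same cached adapter, so the slow setup runs only once.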

interface GitHubService {
  @GET("/search/repositories")
  RepositoriesResponse searchRepos(
      @Query("q") String query,
      @Query("since") Date since);
}

/search/repositories?q=retrofit&since=2015-08-27
/search/repositories?q=retrofit&since=20150827

One of the nice things about how you define your APIs in interfaces is that you can use the objects you use every day to create these requests. It's just another method that you're calling; it only happens to be backed by an HTTP call. However, the problem was that we were severely limited in how we could use objects that weren't so-called "primitives".

Say we have an endpoint to which we want to pass a date. A Date, obviously, is a normal object. If you want to pass it to one of these methods, all Retrofit can really do is call toString() on it. However, the URLs and APIs that you're calling might want different representations. They might want to take that date and format it in a different way (especially true for objects more complex than a date). We really had no story beyond toString() for that.
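Using only the JDK, the two query representations from the example above can be produced with SimpleDateFormat; the point is that neither matches what Date.toString() gives you:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// The two date representations an API might want for the same Date value.
class DateFormats {
  static String dashed(Date d) {
    SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd");
    f.setTimeZone(TimeZone.getTimeZone("UTC"));
    return f.format(d); // e.g. "2015-08-27"
  }

  static String compact(Date d) {
    SimpleDateFormat f = new SimpleDateFormat("yyyyMMdd");
    f.setTimeZone(TimeZone.getTimeZone("UTC"));
    return f.format(d); // e.g. "20150827"
  }
}
```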

That’s all of what’s wrong with Retrofit 1. How are we going to fix it?

Retrofit 2 (10:18)

With Retrofit 2, we hope to address all of the problems that have been brought to our attention over the years by people using Retrofit 1.

Call (10:30)

To start off, we have a new type. If you're familiar with making API calls with OkHttp, you may know that it has a class called Call. We now have a Call inside of Retrofit 2. It has basically the exact same semantics, except it knows how to do things like deserialization: it knows how to take the HTTP response and turn it into a list of contributors, whereas OkHttp just gives you back the raw body.

Call models a single request/response pair. These are one-shot things, and they're created for each endpoint. Conveniently, this lets you separate the creation of the call from its execution: you can create the Call object, hand it to another class, and have that separation of concerns.

Each call instance can only be used once, so it’s a single request/response pair. However, just as with OkHttp’s call, you can actually call the clone method — the Java clone. We’ve implemented this to create a new instance that allows you to make subsequent calls, making this a very cheap operation. You could, for example, create a call object and then just always clone it before you make a request to ensure that it has not already been executed.

Another big advantage is that it supports both synchronous and asynchronous execution in a single type. Also, it can actually be canceled, which is great. This will actually call through to the underlying HTTP client and cancel the request. If it’s in flight, it will actually un-hook itself from the server. Or, if it hasn’t actually executed yet because it’s asynchronous, it never actually will. Let’s take a look at what that looks like.

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

Call<List<Contributor>> call =
    gitHubService.repoContributors("square", "retrofit");

This Call object is parameterized, and it's what you return from the API methods in your interface. You call the method the same way as before, and you get an instance back. We can execute it but, to reiterate, these can only be used once.

Call<List<Contributor>> call =
    gitHubService.repoContributors("square", "retrofit");

response = call.execute();

// This will throw IllegalStateException:
response = call.execute();

Call<List<Contributor>> call2 = call.clone();
// This will not throw:
response = call2.execute();

It fails after you try to execute it twice. However, you can clone these instances, which is very cheap, so clone the call if you're using it multiple times, or just call the interface method again each time.

Asynchronous execution is done through an enqueue method. We have execute for synchronous calls and enqueue for asynchronous ones:

Call<List<Contributor>> call =
    gitHubService.repoContributors("square", "retrofit");

call.enqueue(new Callback<List<Contributor>>() {
  @Override void onResponse(/* ... */) {
    // ...
  }

  @Override void onFailure(Throwable t) {
    // ...
  }
}); 

Whether you enqueue something asynchronously or execute it synchronously, you can cancel the request, and it actually cancels:

Call<List<Contributor>> call =
    gitHubService.repoContributors("square", "retrofit"); 

call.enqueue(/* callback */);
// or... 
call.execute();

// later...
call.cancel();

Parameterized Response Object (13:48)

Another new feature is this parameterized Response type. Response is going to give you the metadata that we were sorely missing: the response code, the response message, and access to the headers.

class Response<T> {
  int code();
  String message();
  Headers headers();

  boolean isSuccess(); 
  T body();
  ResponseBody errorBody(); 
  com.squareup.okhttp.Response raw();
}

We have a convenience method for determining whether or not the request was successful, basically just a check that the code is in the 2xx range. Then we have the body, and there's a separate method for accessing the error body. Which of these two you use corresponds to the return value of that boolean. Only if a response was successful do we actually do the deserialization and make it available from body(). If isSuccess() returns false, we can't really know anything about what type the response is, so we hand you this ResponseBody type, which basically just encapsulates the content type, the length, and the raw body for you to interpret as you want.
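To make that contract concrete, here is a stripped-down model of the success/error split. This is illustrative only, not Retrofit's real Response class:

```java
// Toy model of the Response<T> contract: the deserialized body is only
// present on success; otherwise you get the raw error payload.
class Result<T> {
  private final int code;
  private final T body;        // deserialized only on success, else null
  private final String error;  // raw error body otherwise, else null

  Result(int code, T body, String error) {
    this.code = code;
    this.body = body;
    this.error = error;
  }

  // "Success" is just a range check on the HTTP status code.
  boolean isSuccess() {
    return code >= 200 && code < 300;
  }

  T body() {
    return body;
  }

  String errorBody() {
    return error;
  }
}
```

Consuming code branches on isSuccess() before touching body(); on the error branch, only the raw errorBody() is meaningful.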

Those are the two big changes in how your interfaces are defined.

Dynamic URL Parameter (16:33)

A big problem that has pained me for a couple years is a dynamic URL parameter, but we have now fixed it! If we make requests to GitHub and we get back a response, that response is actually going to include a header that looks like this, which is kind of ugly:

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
} 

Call<List<Contributor>> call = 
    gitHubService.repoContributors("square", "retrofit");
Response<List<Contributor>> response = call.execute(); 

// HTTP/1.1 200 OK 
// Link: <https://api.github.com/repositories/892275/contributors?page=2>; rel="next",
//       <https://api.github.com/repositories/892275/contributors?page=3>; rel="last"
// ...

This header dictates the URLs you should use if you want to do pagination. Of course, the response will not contain the full list; it'll only contain the first 20 or so. Before, we really had no way of executing these subsequent requests using this header. GitHub wants you to use it because they can do smart things like keep that data cached in memory: the link points you to the exact same server, so they don't have to pay the cost of figuring all that out from the database again.

With our new Response type, we get the metadata as well: not only do we get a list of contributors, but we can also look at this header and write some hypothetical method to yank out the link to the next page.

Response<List<Contributor>> response = call.execute();

// HTTP/1.1 200 OK
// Link: <https://api.github.com/repositories/892275/contributors?page=2>; rel="next",
//       <https://api.github.com/repositories/892275/contributors?page=3>; rel="last"
// ... 

String links = response.headers().get("Link");
String nextLink = nextFromGitHubLinks(links); 

// https://api.github.com/repositories/892275/contributors?page=2

You can see that this is a slightly different URL than the one in the interface.
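The nextFromGitHubLinks method above is hypothetical; one possible sketch uses a regular expression to pull the rel="next" URL out of the Link header:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A sketch of the hypothetical nextFromGitHubLinks helper: extract the URL
// tagged rel="next" from a GitHub Link header, or return null if absent.
class LinkHeaders {
  private static final Pattern NEXT =
      Pattern.compile("<([^>]+)>\\s*;\\s*rel=\"next\"");

  static String nextFromGitHubLinks(String links) {
    if (links == null) return null;
    Matcher m = NEXT.matcher(links);
    return m.find() ? m.group(1) : null;
  }
}
```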

The way that we’ve addressed this is by allowing these dynamic URLs to be used in follow up requests. Now, you have to define a separate interface method for this. This is a fundamental requirement because it’s a slightly different type of request. In the initial one, you’re choosing what owner and what repo to look at, and you’re making that initial request with those model types. The follow up, though, is really a fundamentally different request because that information is already encoded in the follow up link. We wouldn’t want to put this this URL parameter on the same method because it really doesn’t make sense. Instead, you define a separate method.

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);

  @GET
  Call<List<Contributor>> repoContributorsPaginate(
      @Url String url);
}

We have this new @Url annotation which allows you to pass in that URL. You'll see that we've also made the relative path on @GET optional, so you leave it off here.

With this follow up method, we can take that link and we can call this second paginate method to get subsequent calls.

String nextLink = nextFromGitHubLinks(links); 

// https://api.github.com/repositories/892275/contributors?page=2 

Call<List<Contributor>> nextCall = 
    gitHubService.repoContributorsPaginate(nextLink);

This is going to make the request to page 2, whose response will also have a header that points at page 3. We can keep doing that with the paginate method to get the subsequent pages. You see this pattern a lot in certain APIs, and its absence in Retrofit 1 was a huge problem for a lot of people.

Multiple, Efficient Converters (19:31)

Retrofit 1 had a converter problem. Really, it wasn't much of a problem for most people, but it was a problem internal to the library. In Retrofit 2, we have addressed it, and we're also allowing multiple converters.

Before, if you wanted to make calls to an API that had a JSON response and then a separate API call that had a proto response, the only way to do that was to separate those into separate service declarations.

interface SomeProtoService {
  @GET("/some/proto/endpoint")
  Call<SomeProtoResponse> someProtoEndpoint();
}

interface SomeJsonService {
  @GET("/some/json/endpoint")
  Call<SomeJsonResponse> someJsonEndpoint();
}
That’s because there was only one converter, and it was specified on the REST adapter object. We wanted to reconcile this because these interface declarations should be semantic. They should group together APIs, which operate on the same thing, like an account service, a user service, or a Twitter service. The fact that some URLs might return different response serialization formats is really not your problem to organize in your services, but something we should take care of.

Now you can collapse these into the same service:

interface SomeService {
  @GET("/some/proto/endpoint")
  Call<SomeProtoResponse> someProtoEndpoint();

  @GET("/some/json/endpoint")
  Call<SomeJsonResponse> someJsonEndpoint();
}

I want to walk through how that works, because understanding how we know which converter is used will play into how you write your code. This first method returns a hypothetical proto object.

SomeProtoResponse —> Proto? Yes!

We’re just going to ask each of these converters whether they can handle a type or not. We ask the proto converter, “Hey, can you handle this SomeProtoResponse?”, and it does whatever it needs to do to determine if it can handle it. Protocol buffers all extend from the same class. In protobuff it’s called message or message lite, and in wire it’s called message. It’s basically just determining whether or not this class extends from message, and if it does, it says yes.

For the JSON one, we again ask the proto converter first; it sees that the type doesn't extend from Message, so it says no. We then move on to the next converter, which is the JSON one, and it says that JSON can be used.

SomeJsonResponse —> Proto? No! —> JSON? Yes!

There’s really no restraint or requirement about the hierarchy, so we can’t really know whether something can be JSON or not, so they JSON converters always say yes. That’s important to note, because usually that means they have to be last.

Another important note is that we no longer ship a default converter. By default, then, you can’t use Retrofit without explicitly telling it what converters are available for it to use. There are no dependencies on serialization mechanism in the core, so you actually have to bring that in yourself. We still provide the converters, but you have to add the explicit dependency, and you have to explicitly tell Retrofit to use the converter.

Multiple, Pluggable Execution Mechanisms (22:38)

Before, we had rigidity of the execution mechanism. We’ve now fixed that and we’ve made it pluggable, and now we allow multiple of them. This is similar to how the converters work.

For example, say you have a method that returns Call. Call is built in; it's Retrofit 2's native execution mechanism.

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);

...

Now, you can bring your own, or use one of the ones that we provide:

...

  @GET("/repos/{owner}/{repo}/contributors")
  Observable<List<Contributor>> repoContributors2(
      @Path("owner") String owner,
      @Path("repo") String repo);

  @GET("/repos/{owner}/{repo}/contributors")
  Future<List<Contributor>> repoContributors3(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

We still have an RxJava one, but it is now separate. (Or, if you hate yourself and you like Futures, you can write your own to do that as well.) How does this work?

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(..);

  @GET("/repos/{owner}/{repo}/contributors")
  Observable<List<Contributor>> repoContributors2(..);

  @GET("/repos/{owner}/{repo}/contributors")
  Future<List<Contributor>> repoContributors3(..);
}

We basically look at the return type. For Call, we just ask the first execution mechanism, "Hey, do you know how to handle Call?" If that's the RxJava one, it says no, because Call is not an Observable. We then move on to the built-in mechanism, which says, "Yes, this is a Call."

call —> RxJava? No! —> Call? Yes!

This works similarly for observable: we would just ask the RxJava one, and it says yes:

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(..);

  @GET("/repos/{owner}/{repo}/contributors")
  Observable<List<Contributor>> repoContributors2(..);

  @GET("/repos/{owner}/{repo}/contributors")
  Future<List<Contributor>> repoContributors3(..);
}

Observable —> RxJava? Yes!

If you don’t have one installed, this means we can’t do validation of the types. If you ask for Future, both of these are going to say no.

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(..);

  @GET("/repos/{owner}/{repo}/contributors")
  Observable<List<Contributor>> repoContributors2(..);

  @GET("/repos/{owner}/{repo}/contributors")
  Future<List<Contributor>> repoContributors3(..);
}

Future —> RxJava? No! —> Call? No! —> Throw!

This will throw an exception, which means that you either need to change the type or install a mechanism. Mechanisms are discussed further below.

Powered by OkHttp (24:17)

Retrofit 2 now depends on OkHttp, and the HTTP client is no longer pluggable. I realize this is a controversial thing, but hopefully I’ll be able to prove to you why this is the right decision.

In 2012, even before Retrofit 1.0's release, we needed client abstractions. There's an infamous blog post by Jesse Wilson which discussed the Apache and URLConnection clients, including how to choose between them, and it came with all of these ridiculous caveats.

In 2012, we needed request/response abstractions. Apache had them, because Apache has always had a very object-oriented model for making requests, but URLConnection didn't: it has a ton of APIs and is very stateful, and we needed those more object-oriented abstractions to even have the client abstraction.

In 2012, we needed a header abstraction. Again, Apache had this, but URLConnection didn't; it just used strings, and we needed something to represent individual headers.

But it’s not 2012 anymore – it’s 2015. OkHttp is now small and focused, and it has a really great API. We are essentially mirroring a lot of it at a higher level in Retrofit 2, and it has all the features that we need, including all of these abstractions. They’re great, and they’re very correct. This has been a big win for the size of Retrofit. We basically tore out about 60% of Retrofit, yet we now have so many more features.

So, yes, you do have to include OkHttp, because it’s now a required dependency. However, I’m willing to bet that a lot of you were already using it, and you’re going to see that because of this, Retrofit is actually a lot better.

Powered by OkHttp (and Okio!) (26:20)

A great thing about using OkHttp is that we can expose its types in Retrofit 2's public API. You may have seen ResponseBody as the return type of the errorBody() method on Response, and we return the raw OkHttp response from Response's raw() method. We're exposing these types, and they basically replace all the ones I showed earlier, only with much nicer, cleaner APIs.

A really tiny IO library called Okio sits beneath OkHttp. I gave a talk at Droidcon Montreal about it, where I discussed the motivations of why these are good choices, how they’re extremely efficient and why you should be using them. I also mentioned Retrofit 2 in that talk, which, at the time, was mostly hypothetical. Now that Retrofit 2 is actually real, take the time to watch the talk!

The Efficiency of Retrofit 2 (27:31)

I made this crazy graph to show you how Retrofit 2 is actually so much more efficient than Retrofit 1 or other potential solutions, thanks to this hard dependency and these abstractions. I walk through the graph in the video above, so be sure to watch this section of the talk!

Setup - Retrofit Type (31:24)

Now, let’s look at the actual Retrofit type which replaced the REST adapter type and how that gets set up. The old method was called endpoint, but now we just call it a baseUrl. It’s the URL of the server that you’re talking to, so in this case we’re just talking to GitHub:

Retrofit retrofit = new Retrofit.Builder()
    .baseUrl("https://api.github.com")
    .build(); 

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
} 

GitHubService gitHubService = retrofit.create(GitHubService.class);

Here, we have our interface. We just call this create method, which is the same as in Retrofit 1. It's going to generate an implementation of our interface on which we can call methods.
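Under the hood, create builds this implementation with java.lang.reflect.Proxy, routing every interface method call through a single invocation handler, which is where the annotations are read and the request is built. A minimal sketch of just the proxy technique; the Echo interface here is purely for illustration:

```java
import java.lang.reflect.Proxy;

// Demonstrates the dynamic-proxy technique behind retrofit.create():
// one handler intercepts every method call on the generated implementation.
class ProxyDemo {
  interface Echo {
    String greet(String name);
  }

  static Echo create() {
    return (Echo) Proxy.newProxyInstance(
        Echo.class.getClassLoader(),
        new Class<?>[] { Echo.class },
        // Retrofit's handler would inspect method annotations and build an
        // HTTP request here; we just report what was called.
        (proxy, method, args) -> "called " + method.getName() + "(" + args[0] + ")");
  }
}
```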

When we call this method, repoContributors, Retrofit is going to create the URL. So if we pass "square" and "retrofit" as the owner and repository, respectively, we get back this URL: https://api.github.com/repos/square/retrofit/contributors. Internally, Retrofit uses OkHttp's HttpUrl type for the base URL, and then the resolve method takes the relative path and resolves it into a full URL that we can make the request to. That's how you get this full URL. This is important to understand, because there's a huge semantic change that will likely affect how you write these relative URLs. I'm going to show that by changing the API to have a suffix of, say, v3:

Retrofit retrofit = new Retrofit.Builder()
    .baseUrl("https://api.github.com/v3/")
    .build();

interface GitHubService {
  @GET("/repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

Although this is not GitHub’s actual API, there are a ton of APIs out there that have these suffixes and these paths. If you were to call this same method, the URL that’s going to be resolved looks like this: https://api.github.com/repos/square/retrofit/contributors. You’ll notice there is no v3 after the host because the relative URL starts with a slash. Retrofit 1 forced you to have that leading slash just for semantic purposes, but we always appended it to the endpoint. Now, with these base and relative URLs, we’re using this resolve method. If you’ve ever written an anchor tag with HREF in HTML, it basically works the same way. If you put that leading slash, that means it’s an absolute path which is going to start from the host. However, if you omit that slash…

interface GitHubService {
  @GET("repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

…it’s now fully relative and it’s just going to go from whatever the current path is and resolve it from there. By removing that leading URL, you’re going to get the correct full URL which includes that v3 path:

Retrofit retrofit = new Retrofit.Builder()
    .baseUrl("https://api.github.com/v3/")
    .build(); 

interface GitHubService {
  @GET("repos/{owner}/{repo}/contributors")
  Call<List<Contributor>> repoContributors(
      @Path("owner") String owner,
      @Path("repo") String repo);
}

// https://api.github.com/v3/repos/square/retrofit/contributors
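java.net.URI.resolve follows the same RFC 3986 rules as OkHttp's HttpUrl and an HTML href for these cases, so you can see both behaviors with just the JDK:

```java
import java.net.URI;

// Demonstrates the leading-slash semantics of base/relative URL resolution.
class UrlResolution {
  static String resolve(String base, String relative) {
    return URI.create(base).resolve(relative).toString();
  }
}
```

An absolute path ("/repos/…") replaces everything after the host, dropping v3; a relative path ("repos/…") is resolved against the base's /v3/ directory and keeps it.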

Since we now depend on OkHttp, we don't have the client abstraction, but of course we still let you hand us a client instance. It's just now an OkHttpClient, OkHttp's own type. This allows you to do things like configure interceptors, an SSL socket factory, or timeouts. (OkHttp has default timeouts, so you don't have to set them if you don't need anything custom, but if you did want to, this is how you would do it.)

OkHttpClient client = new OkHttpClient();
client.interceptors().add(..);

Retrofit retrofit = new Retrofit.Builder()
    .baseUrl("https://api.github.com")
    .client(client)
    .build();

This is also where you’re going to specify the converters and execution mechanisms for things like RxJava. We may have a converter for GSON, and we can have multiple. We can add a converter for protocol buffers:

Retrofit retrofit = new Retrofit.Builder()
    .baseUrl("https://api.github.com")
    .addConverterFactory(GsonConverterFactory.create())
    .addConverterFactory(ProtoConverterFactory.create())
    .build();

I want to stress that the order matters. This is the order in which we’re going to ask each converter whether or not it can handle a type. What I have written above is actually wrong: the GSON converter claims every type, so if we ever specify a proto, it’s going to be encoded as JSON, and we’ll try to deserialize the response body as JSON. That’s obviously not what we want. We have to flip these because we want to check protocol buffers first, and then fall back to GSON for JSON:

Retrofit retrofit = new Retrofit.Builder()
    .baseUrl("https://api.github.com")
    .addConverterFactory(ProtoConverterFactory.create())
    .addConverterFactory(GsonConverterFactory.create())
    .build();

Admittedly, this is not yet well documented in Retrofit, but these are some tips. If you want to use RxJava instead of Call, you need a call adapter factory:

Retrofit retrofit = new Retrofit.Builder()
    .baseUrl("https://api.github.com")
    .addConverterFactory(ProtoConverterFactory.create())
    .addConverterFactory(GsonConverterFactory.create())
    .addCallAdapterFactory(RxJavaCallAdapterFactory.create())
    .build();

This is something that knows how to take the Call instance and turn it into something else; it adapts it to another type. Right now, we only have one, for RxJava, which supports Observable as well as the new, experimental Single type. If you know RxJava, Single is a new type which represents an observable that only ever emits one item. You can use either of those two with this call adapter factory.

Extensibility (36:50)

These things are pluggable, and that means you can bring your own. The implementation of this is basically a single method. We hand it a type and it either returns null to say no, or an instance of the converter.

// Asking each factory whether it can handle SomeJsonResponse:

class ProtoConverterFactory {
  Converter<?> create(Type type); // returns null
}

class GsonConverterFactory {
  Converter<?> create(Type type); // returns a Converter<?>
}

So, if I hand it this JSON response type, which does not extend from a proto, the proto factory is just going to say, “I don’t know how to handle this,” and return null. However, the GSON converter factory returns an instance to say that it can handle it. That’s why it’s a converter factory: we ask it to create a converter instance.
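That first-match behavior can be sketched in a few lines of plain Java. The names here are simplified stand-ins for illustration, not Retrofit’s actual internals: we walk the factories in registration order and take the first non-null converter.

```java
import java.lang.reflect.Type;
import java.util.Arrays;
import java.util.List;

public class ConverterLookup {
  // Simplified stand-ins for Retrofit's converter types.
  interface Converter<T> {}

  interface Factory {
    Converter<?> create(Type type); // null means "I can't handle this type"
  }

  // Ask each factory in order; the first non-null answer wins.
  static Converter<?> resolve(List<Factory> factories, Type type) {
    for (Factory factory : factories) {
      Converter<?> converter = factory.create(type);
      if (converter != null) return converter;
    }
    throw new IllegalArgumentException("No converter for " + type);
  }

  public static void main(String[] args) {
    Factory proto = type -> null;                       // claims nothing
    Factory gson = type -> new Converter<Object>() {};  // claims everything

    // With proto registered first, other types still fall through to GSON.
    System.out.println(resolve(Arrays.asList(proto, gson), String.class) != null);
    // true
  }
}
```

This ordering is exactly why registering GSON before protocol buffers breaks proto handling: the “claims everything” factory never lets the loop reach the one behind it.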

This is easy to implement yourself if you want to bring something else. The actual implementation of converter is very similar to what it was previously, although instead of typed input and typed output, we now have OkHttp’s request body and response body.

interface Converter<T> {
  interface Factory {
    Converter<?> create(Type type);
  }

  T fromBody(ResponseBody body);
  RequestBody toBody(T value);
}

This is now more efficient because we can do the lookup of those adapters once. For example, GSON has something called a TypeAdapter, so when we ask the GSON converter factory whether it can handle a type, it looks up the type adapter, caches it, and reuses it whenever it’s doing conversion. It’s a tiny win, but not paying that cost on every call is great.

The call adapters have the same patterns. We ask a call adapter factory if it can handle a type, and it behaves the same way (i.e. it will return null to say no). Its API is really simple.

interface CallAdapter<T> {
  interface Factory {
    CallAdapter<?> create(Type type);
  }

  Type responseType(); 
  Object adapt(Call<T> value);
}

We have a method which does the adaptation: it takes in an instance of Call and returns an Observable, a Single, a Future, etc. Then there’s also a method to get the response type. When we declare a Call&lt;List&lt;Contributor&gt;&gt;, we have no way of pulling that parameterized type out automatically, so we basically just ask the call adapter to also return the response type. Therefore, if you created an instance of this for Observable, we could ask it and it would hand back the List&lt;Contributor&gt; type.
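That responseType() trick boils down to reflection over parameterized types. Here is a minimal stdlib sketch; the class and method names are made up for illustration, and List&lt;String&gt; stands in for the wrapped response type:

```java
import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

public class ResponseTypeDemo {
  // Stand-in for a declared service method with a generic return type.
  static List<String> sample() {
    return null;
  }

  // Unwrap the single type argument, the way an adapter for Observable<T>
  // or Call<T> would report the type the converter should produce.
  static Type responseType(Type returnType) {
    return ((ParameterizedType) returnType).getActualTypeArguments()[0];
  }

  public static void main(String[] args) throws Exception {
    Method m = ResponseTypeDemo.class.getDeclaredMethod("sample");
    System.out.println(responseType(m.getGenericReturnType()));
    // prints: class java.lang.String
  }
}
```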

Under Construction (40:05)

Retrofit 2 is under construction! It’s not yet complete, but it’s usable. All the elements I’ve covered are in there, and they work. So what isn’t finished?

We don’t have a story yet for the so-called “parameter handler,” but we want something. We want the ability for you to pass in a multi map from Guava, or a date type, or an enum. (Not that you should be using enums on Android 😉)

We also don’t yet have logging. Logging was something in Retrofit 1, but it’s not there in Retrofit 2. We’ll probably need something here. The nice thing about being dependent on OkHttp is you can actually use an interceptor to do logging of the actual underlying request and response. Consequently, we don’t need it for the raw request and response, but we probably need something that logs the Java types.

If you ever used the mock module, that’s not done yet, but it will be!

Documentation is also extremely lacking at this point.

Finally, I want WebSockets in Retrofit 2, so I’ve been working on WebSockets in OkHttp in my spare time. I really want them in there! It’s probably not going to make it in 2.0, but it’s on my mind for a follow up 2.1.

Release? (41:31)

I committed to Retrofit 2 coming this year, and it will be this year. As for a date, we’re not going to commit to anything. I don’t want to be making the same stupid quote joke at Droidcon New York 2016, so it will be this year! I’m determined to make it. As of August 27, 2015, I have made available a beta of 2.0. You can now put this in your app.

dependencies {
  compile 'com.squareup.retrofit:retrofit:2.0.0-beta1'
  compile 'com.squareup.retrofit:converter-gson:2.0.0-beta1'
  compile 'com.squareup.retrofit:adapter-rxjava:2.0.0-beta1'
}

You can depend on it, it works, and the API is relatively stable. The converters and converter factory methods will probably change, but it’s totally usable. Take a look at it, and if you have something that you don’t like or you take issue with, let me know! Thanks.

Jake Wharton

Jake Wharton is an Android developer at Square working on Square Cash. For the past 5 years he’s been living with a severe allergy to boilerplate code and bad APIs. He speaks at conferences all around the world to educate more about this terrible plague that afflicts many developers.