
Doug Stevenson
Developer Advocate
If you've been working with Cloud Functions for Firebase, you've probably wondered how you could speed up the development of your functions. Local testing has been possible for HTTPS functions using the firebase serve command in the Firebase CLI, but it wasn't an option for other types of functions. Now, local testing of all of your functions is easy with the Firebase CLI: if you want to try out your code before you deploy it to Cloud Functions, you can do that with the Cloud Functions shell, available in the Firebase CLI version 3.11.0 or later.

Here's how it works, in a nutshell. We'll use a Realtime Database trigger as an example.

Imagine you have an existing project with a single function in it called makeUppercase. It doesn't have to be deployed yet, just defined in your index.js:

const functions = require('firebase-functions')

exports.makeUppercase = functions.database.ref('/messages/{pushId}/original').onCreate(event => {
    // Read the value that was just written at /messages/{pushId}/original.
    const original = event.data.val()
    console.log('Uppercasing', event.params.pushId, original)
    const uppercase = original.toUpperCase()
    // Write the uppercased value back to a sibling child named "uppercase".
    return event.data.ref.parent.child('uppercase').set(uppercase)
})

This onCreate database trigger runs when a new message is pushed under /messages with a child called original, and writes back to that message a new child called uppercase containing the original value converted to uppercase.

Now you can kick off the emulator shell from your command line using the Firebase CLI:

$ cd your_project_dir
$ firebase experimental:functions:shell

Then, you'll see something like this:

i  functions: Preparing to emulate functions.
✔  functions: makeUppercase
firebase> 

That firebase prompt is waiting there for you to issue some commands to invoke your makeUppercase function. The documentation for testing database triggers says that you can use the following syntax to invoke the function with incoming data to describe the event:

makeUppercase('foo')

This emulates the trigger of an event that would be generated when a new message object is pushed under /messages that has a child named original with the string value "foo". When you run this command in the shell, it will generate some output at the console like this:

info: User function triggered, starting execution
info: Uppercasing pushId1 foo
info: Execution took 892 ms, user function completed successfully

Notice that the console log in the function is printed, and it shows that the database path wildcard pushId was automatically assigned the value pushId1 for you. Very convenient! But you can still specify the wildcard values yourself, if you prefer:

makeUppercase('foo', {params: {pushId: 'custom_push_id'}})

After emulating this function, if you look inside the database, you should also see the results of the function on display, with /messages/{pushId}/uppercase set to the uppercased string value "FOO".

You can simulate any database event this way (onCreate, onDelete, onUpdate, onWrite). Be sure to read the docs to learn how to invoke them each correctly.
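
To give a rough idea of the other shapes (hedged examples, so check the docs for the exact syntax; deleteMessage and updateMessage here are hypothetical functions, not part of the sample above): an onDelete function is invoked with the data being deleted, while onUpdate and onWrite functions take an object describing both sides of the change:

deleteMessage('foo')
updateMessage({before: 'foo', after: 'bar'})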

In addition to database triggers, you can also emulate HTTPS functions, PubSub functions, Analytics functions, Storage functions, and Auth functions, each with their own special syntax.

The Cloud Functions shell is currently an experimental offering, and as such, you may experience some rough edges. If you encounter a problem, please let us know by filing a bug report. You can also talk to other Cloud Functions users on the Firebase Slack in the #functions channel.

Some tips for using the shell

Typing the function invocation each time can be kind of a pain, so be sure to take advantage of the fact that you can navigate and repurpose your invocation history much like you would your shell's command line using the arrow keys.

Also note that the shell is actually a full node REPL that you can use to execute arbitrary JavaScript code and use special REPL commands and keys. This can be useful for scripting some of your test code.

Since you can execute arbitrary code, you can also dynamically load and execute code from other files using the require() function that you're probably already familiar with.

And lastly, if you're like me, and you prefer to use a programmer's editor such as VS Code to write all your JavaScript, you can easily emulate functions by sending the code you want to run to the Firebase CLI. This command will run test code from a file redirected through standard input:

$ firebase experimental:functions:shell < tests.js
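
In case it helps to picture it, a tests.js might contain nothing more than the same invocations you'd type at the prompt (this file is hypothetical; the shell simply evaluates each line in order):

// tests.js -- hypothetical test script piped into the functions shell.
makeUppercase('foo')
makeUppercase('hello world', {params: {pushId: 'custom_push_id'}})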

Happy testing!

Ken Yarmosh
Guest post from Savvy Apps CEO & Founder
A newcomer to the ridesharing space, Sprynt is taking a different approach to building its service. They have a 100% electric fleet and rides are 100% free, paid for by local and corporate sponsorships. So when they first contacted our agency Savvy Apps, we were excited about the opportunity to work with them. We knew on the technology side, though, that Sprynt would pose some unique challenges. After considering a few options, we decided to use Firebase to tackle these challenges and create the best experience for riders, drivers, and the Sprynt management team.

Prioritizing real-time communication and queue management

One of the most important components of a ridesharing app is keeping everything synced in real-time. Sprynt needed fast and reliable synchronized rider and driver apps, GPS tracking, and ride-request queue management. That's why one of the first features that attracted us to Firebase for this app was the Realtime Database.

We leveraged Firebase's synchronization solution for its speed, as well as the Realtime Database listeners for keeping the system fast and lightweight. In our experience, Firebase excels when dealing with simple data schemas that need real-time communication between clients and server.
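
To give a flavor of what those listeners look like, here's a minimal web-SDK sketch (the paths, fields, and helper below are hypothetical; Sprynt's rider and driver apps are native, not web):

// Minimal sketch (hypothetical paths and fields): keep a ride's status in
// sync on every connected client with a Realtime Database listener.
// Assumes the Firebase app has already been initialized.
const rideRef = firebase.database().ref('rides/' + rideId);

rideRef.on('value', snapshot => {
  const ride = snapshot.val();
  // Fires immediately with the current data, then again on every change --
  // for example, when the driver's location or the ride status is updated.
  updateRideUI(ride);   // hypothetical UI helper
});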

Extending to a complete solution

Besides the core product requirement of real-time communication, Sprynt needed a platform that could support a fully-featured app. For example: authentication for registering and logging in, notifications to help with rider and driver communication, and an easy-to-use dashboard to help the Sprynt team understand and manage their system.

Firebase has all of these components, which made it a leading candidate and our eventual choice. It provides the ability to quickly set up and scale a backend with authentication, push notifications, custom cloud functions, file storage, and analytics. The dashboards and admin tools also allow us to stay focused on building what matters most: a compelling user experience. Simply put, Firebase let Savvy begin a product like Sprynt quickly without compromise.

For authentication, we turned to Firebase Auth because we wanted to take advantage of the new phone authentication added this year at Google I/O. We were able to quickly build an authentication mechanism that allowed for users to sign up via phone numbers. This feature was an important one for Sprynt, since it streamlined the onboarding process. That's especially important when someone might want to get started with Sprynt in a hurry.
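
For a sense of what that flow involves, here's a hedged sketch using the Firebase JS SDK (Sprynt's sign-up happens in its native apps; the element ID and phone number below are placeholders):

// Sketch of phone-number sign-in with the Firebase JS SDK.
// Assumes the Firebase app has already been initialized; 'recaptcha-container'
// is a placeholder element ID for the reCAPTCHA widget.
const appVerifier = new firebase.auth.RecaptchaVerifier('recaptcha-container');

firebase.auth().signInWithPhoneNumber('+15555550123', appVerifier)
  .then(confirmationResult => {
    // Firebase texts a verification code to the user; confirm it to finish sign-in.
    const code = window.prompt('Enter the code we sent you');
    return confirmationResult.confirm(code);
  })
  .then(result => console.log('Signed in as', result.user.uid))
  .catch(error => console.error('Phone sign-in failed', error));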

When it came to building in notifications, we used Firebase Cloud Messaging. FCM allowed us to send notifications programmatically, such as when a driver is on the way to a rider. Beyond that, FCM gives Sprynt admins the ability to send out quick one-off messages to their user base through the notifications dashboard. We feel that this functionality will prove invaluable for handling services outages, highlighting new specials from advertisers, or other comparable communication regarding the Sprynt service.
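
Programmatic sends like the "driver is on the way" notification typically go through the Firebase Admin SDK on a server; a rough sketch (with a placeholder device token and message, not Sprynt's actual implementation) looks like this:

// Rough server-side sketch with the Firebase Admin SDK (Node.js).
// Assumes default credentials are available; riderDeviceToken is a
// placeholder for the rider's FCM registration token.
const admin = require('firebase-admin');
admin.initializeApp();

const payload = {
  notification: {
    title: 'Your driver is on the way',
    body: 'Track your Sprynt ride in the app.'
  }
};

admin.messaging().sendToDevice(riderDeviceToken, payload)
  .then(() => console.log('Notification sent'))
  .catch(error => console.error('Error sending notification', error));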

Ensuring Sprynt's longevity

Sprynt launched to great success. In the first month of service, they delivered around 5,000 passengers in their pilot service area. The app maintains a 5-star rating and their advertisers are very happy with their results so far.

Sprynt is already pushing hard to keep up with demand from riders and advertisers, as well as the influx of new driver applications. They also have already begun building a steady, repeat ridership base. Google Analytics for Firebase has proven helpful in tracking this kind of usage, as well as version update adoption rates, user device types, and custom events.

We built Sprynt using Firebase for long-term sustainability without constant developer involvement. By leveraging the Firebase console, we made it as easy as possible for Sprynt's team to manage their business, with as little development support as needed. Cloud Storage for Firebase plus Cloud Functions for Firebase allow Sprynt to upload and process updated or new service areas without directly editing the database. These features will become even more important as Sprynt continues to grow in popularity and open new service areas.
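
One plausible shape for that pipeline, sketched with hypothetical paths and a hypothetical parseServiceArea() helper (this is not Sprynt's actual code), is a Storage-triggered function that parses the uploaded file and writes the result into the database:

// Sketch: when a service-area file lands in Cloud Storage, parse it and
// write the result into the Realtime Database. The 'service-areas/' path
// and parseServiceArea() are hypothetical.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

exports.importServiceArea = functions.storage.object().onChange(event => {
  const object = event.data;
  if (object.resourceState === 'not_exists') {
    return null;   // the object was deleted, not uploaded
  }
  if (!object.name.startsWith('service-areas/')) {
    return null;   // ignore unrelated uploads
  }
  return admin.storage().bucket(object.bucket).file(object.name).download()
    .then(contents => parseServiceArea(contents[0]))   // hypothetical parser
    .then(area => admin.database().ref('serviceAreas').push(area));
});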

A smooth ride

While Firebase Realtime Database has some weaknesses in its query support — particularly around complex queries that include filtering and sorting collections — overall, we've been happy with the platform and its progress.

We've used Firebase since it launched years ago, but we continue to appreciate when the observeSingleEventOfType function on one device responds to an event triggered by another. Watching it happen for the first time between the Sprynt Rider app and Sprynt Driver app still provides that "aha" moment, even today.

Firebase continues to enhance our ability to build and scale new businesses as quickly as possible.

If you want to learn more about using Firebase yourself, check out the use cases section of the website or subscribe to the Firebase channel on YouTube.

Originally posted on the Fabric Blog by Jason St. Pierre, Product Manager

For many years, developers and app teams have relied on Crashlytics to improve their app stability. By now, you're probably familiar with the main parts of the Crashlytics UI; perhaps you even glance at crash-free users, crash-free sessions, and the issues list multiple times a day (you wouldn't be the only one!).

In this post, we want to share 7 pro-tips that will help you get even more value out of Crashlytics, which is now part of the new Fabric dashboard, so you can track, prioritize, and solve issues faster.

1. Speed up your troubleshooting by checking out crash insights

In July, we officially released crash insights out of beta. Crash insights helps you understand your crashes better by giving you more context and clarity on why those crashes occurred. When you see a green lightning bolt appear next to an issue in your issues list, click on it to see potential root causes and troubleshooting resources.

2. Mark resolved issues as "closed" to track regressions

Debugging and troubleshooting crashes is time-consuming, hard work. As developers ourselves, we understand the urge to sign off and return to more exciting tasks (like building new app features) as soon as you resolve a pesky issue - but don't forget to mark this issue as "closed" in Crashlytics! When you formally close out an issue, you get enhanced visibility into that issue's lifecycle through regression detection. Regression detection alerts you when a previously closed issue reoccurs in a new app version, which is a signal that something else may be awry and you should pay close attention to it.

3. Close and lock issues you want to ignore and declutter your issue list

As a general rule of thumb, you should close issues so you can monitor regressions. However, you can also close and lock issues that you don't want to be notified about because you're unlikely to fix or prioritize them. These could be low-impact, obscure bugs or issues that are beyond your control because the problem isn't in your code. To keep these issues out of view and declutter your Crashlytics charts, you can close and lock them. By taking advantage of this "ignore" functionality, you can fine-tune your stability page so only critical information that needs action bubbles up to the top.

4. Use wildcard builds as a shortcut for adding build versions manually

Sometimes, you may have multiple builds of the same version. These build versions start with the same number, but the tail end contains a unique identifier (such as 9.12 (123), 9.12 (124), 9.12 (125), etc). If you want to see crashes for all of these versions, don't manually type them into the search bar. Instead, use a wildcard to group similar versions together much faster. You can do this by simply adding a star (aka. an asterisk) at the end of your version prefix (i.e. 9.12*). For example, if you use APK Splits on Android, a wildcard build will quickly show you crashes for the combined set of builds.

5. Pin your most important builds to keep them front and center

As a developer, you probably deploy a handful of builds each day. As a development team, that number can shoot up to tens or hundreds of builds. The speed and agility with which mobile teams ship is impressive and awesome. But you know what's not awesome? Wasting time having to comb through your numerous builds to find the one (or two, or three, etc.) that matter the most. That's why Crashlytics allows you to "pin" key builds so that they appear at the top of your builds list. Pinned builds allow you to find your most important builds faster and keep them front and center, for as long as you need. Plus, this feature makes it easier to collaborate with your teammates on fixing crashes because pinned builds will automatically appear at the top of their builds list too.

6. Pay attention to velocity alerts to stay informed about critical stability issues

Stability issues can pop up anytime - even when you're away from your workstation. Crashlytics intelligently monitors your builds to check if one issue has caused a statistically significant number of crashes. If so, we'll let you know if you need to ship a hot fix of your app via a velocity alert. Velocity alerts are proactive alerts that appear right in your crash reporting dashboard when an issue suddenly increases in severity or impact. We'll send you an email too, but you should also install the Fabric mobile app, which will send you a push notification so you can stay in the loop even on the go. Keep an eye out for velocity alerts and you'll never miss a critical crash, no matter where you are!

7. Use logs, keys, and non-fatals in the right scenarios

The Crashlytics SDK lets you instrument logs, keys, non-fatals, and custom events, which provide additional information and context on why a crash occurred and what happened leading up to it. However, logs, keys, non-fatals, and custom events are designed to track different things, so let's review the right way to use them.

Logs: You should instrument logs to gather important information about user activity before a crash. This could range from user behavior (e.g. the user went to the download screen and clicked the download button) to details about the user's action (e.g. which image was downloaded and where it was downloaded from). Basically, logs are breadcrumbs that show you what happened prior to a crash. When a crash occurs, we take the contents of the log and attach it to the crash to help you debug faster. Here are instructions for instrumenting logs for iOS, Android, and Unity apps.

Keys: Keys are key-value pairs, which provide a snapshot of information at one point in time. Unlike logs, which record a timeline of activity, keys record only the last known value, which is overwritten as it changes over time. Since keys are overwritten, you should use them for something that you only want the last known value of. For example, use keys to track the last level a user completed, the last step a user completed in a wizard, what image the user looked at last, and what the last custom settings configuration was. Keys are also helpful in providing a summary or "roll-up" of information. For instance, if your log shows "login, retry, retry, retry," your key would show "retry count: 3." To set up keys, follow these instructions for iOS, Android, and Unity apps.

Non-fatals: While Crashlytics captures crashes automatically, you can also record non-fatal events. Non-fatal events mean that your app is experiencing an error, but not actually crashing.

For example, a good scenario to log a non-fatal is if your app has deep links, but fails to navigate to them. A broken link isn't something that will necessarily crash your app, but it's something you'd want to track so you can fix the link. A bad scenario to log a non-fatal is if an image fails to load in your app due to a network failure because this isn't actionable or specific.

You should set up non-fatal events for something you want the stack trace for so you can triage and troubleshoot the issue.

If you simply want to count the number of times something happens (and don't need the stack trace), we'd recommend checking out custom events.

These 7 tips will help you get the most out of Crashlytics. If you have other pro-tips that have helped you improve your app stability with Crashlytics, tweet them at us! We can't wait to learn more about how you use Crashlytics.

Get Crashlytics

Ibrahim Ulukaya
Developer Programs Engineer

We've provided a number of different ways for you to get started building your app with the Firebase platform -- everything from quickstarts for many of our individual products, to codelabs, to some Getting Started screencasts on our YouTube channel.

But what happens after you've gotten started with a feature, and are looking to build something more substantial? How do you learn how to avoid race conditions while writing to the Firebase Database? Or lazily create an infinite feed? Do you wish there were an open-sourced Firebase playbook app that you could use to see real-life use cases in motion? Or an app that demonstrates the use of multiple Firebase products together, so you can follow the same practices in your own app?

For all you developers who want to see an app built for a real-life scenario, we've created an open-sourced narrative app called FriendlyPix. FriendlyPix uses some of the most popular Firebase SDKs, such as Analytics, Cloud Messaging, Cloud Functions, Authentication (with FirebaseUI), Realtime Database, Storage, Remote Config, Invites, and AdMob.

Best Practices

FriendlyPix highlights some of the best practices when using Firebase, such as:

  • Using FirebaseUI for Auth
  • Creating indexes in the Realtime Database for fast search
  • Fanning out simultaneous writes to avoid race conditions (see the sketch after this list)
  • Building a data hierarchy of flat, denormalized data for fast access
  • Running ordered, filtered queries for partial data access
  • Creating lazily updated feeds
  • Using the proper file and folder structure when uploading images to Firebase Storage in conjunction with Cloud Functions
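
As a flavor of the fan-out pattern, here's a minimal sketch with hypothetical paths (not FriendlyPix's exact data model): a single multi-path update writes the new post everywhere it needs to appear in one atomic operation, so readers never observe a half-written state.

// Minimal fan-out sketch (hypothetical paths): write one post to several
// locations in a single atomic update to avoid race conditions.
// uid and text are placeholders for the signed-in user and the post body.
const postId = firebase.database().ref('posts').push().key;

const updates = {};
updates['posts/' + postId] = {author: uid, text: text};
updates['user-posts/' + uid + '/' + postId] = true;
updates['feed/' + uid + '/' + postId] = true;

firebase.database().ref().update(updates);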

We look forward to seeing you use these best practices in your app, or use FriendlyPix as a starting point for your app.

Get Started

To get started with FriendlyPix, you can read the design document or check out the apps (Android, iOS, and Web) and associated Cloud Functions on GitHub.

The web version is already hosted at https://friendly-pix.com for you to try out, and we are planning to release FriendlyPix on other platforms for you to try as well.

We'll be updating the app and adding further SDKs in the coming weeks, so keep an eye on this blog or watch our Github repos to stay updated.

Questions / Issues / Contribute

You can ask FriendlyPix-related questions on Stack Overflow with the firebase and friendlypix tags. Issue trackers are hosted on GitHub in the respective platform repos: Web, iOS, and Android. We'd love for you to contribute to the project, although before doing so please read our Contributor guide.

Todd Kerpelman
Developer Advocate

Perhaps you're already familiar with Firebase Dynamic Links -- smart URLs that take the user to any location within your iOS or Android app, even if your user needs to install the app first. Over the last couple of months, the team has made some nice improvements to Dynamic Links, particularly on the iOS side of things, that will make it easier for you to use them in your apps. Let's take a look at what's new!

Better App Preview page

A while back, the Dynamic Links team added an App Preview page for situations where a user clicked on a link and didn't have the app installed on iOS. We added this because some apps -- particularly popular social ones -- tended to ignore the JavaScript redirect that took users to the App Store. So these App Preview pages provided a way to ensure that users still ended up at the App Store, like you intended. It was also a nicer experience for many users, because they were better prepared to see the App Store come up.

That said, our initial page was a little… spartan. Since introducing this page, we've made a few improvements to dress it up with graphics and assets taken either from your app's listing in the App Store, or from preview assets that you can specify directly. We've found this has led to a significant bump in the number of users who continue to click through to the App Store. And it looks better, too.

App Preview pages: Before, the newer default version, and one with custom assets

Of course, if you're still not excited about the idea of having an App Preview page, you're always welcome to remove it. You can do this by adding efr=1 to the dynamic link URL you're generating, checking the "Skip the app preview page" checkbox in the Firebase Console, or using the forcedRedirectEnabled parameter in the iOS and Android builder APIs.

Better error messages -- now with links!

In many cases now, when you encounter error messages in your Dynamic Links implementation, we'll provide you with direct links to our documentation that describe in more detail exactly what these errors mean, and how to fix 'em. Wow! Who knew links could be used as a way to redirect users to more content that's of interest to them? Oh, wait. We did. That's our entire product.

Self-diagnostic tools on iOS

While we're on the subject of making it easier for you to implement Dynamic Links, we've also included self-diagnostic tools with the Dynamic Links library on iOS. By calling DynamicLinks.performDiagnostics(completion: nil) anywhere within your code, the Dynamic Links library can analyze your project and let you know if it detects any common errors with your setup. It also gives you some helpful information that you should send to our troubleshooting team, if you ever need to reach out to them.

More detailed analytics

In the past, when you generated a short Dynamic Link via the console, we were able to tell you how many times per day that link was clicked. While that was nice and all, we've recently boosted our analytics reports to include some more detailed information. Now we can tell you how many times per day a user re-opened your app because they clicked on a Dynamic Link, as well as how many times per day your short Dynamic Link resulted in a user opening up your app for the first time. This holds true both for the analytics you get from the Firebase Console, and also for the analytics you can retrieve using our REST API.

And, as always, if you want to add in utm parameters to your Dynamic Links, Google Analytics for Firebase can make sure it attributes any important conversion events to the Dynamic Link that brought the user to your app in the first place.

Give 'em a try!

All of these changes are on top of a bunch of other improvements we've made to Firebase Dynamic links over the past few months, including:

  • Adding a REST API for retrieving analytics information on your short Dynamic Links, in case you want analytics information but just don't feel like visiting the Firebase Console
  • A link debugging page that shows you, through a pretty fantastic flow chart, exactly what will happen in every situation when a user clicks on a dynamic link
  • Better tools on iOS and Android to build dynamic links on the fly

So if you haven't tried Firebase Dynamic Links lately, this would be a great time to give 'em a try! You can check out all of our documentation to get started, and you can always reach us through our support channels.

Originally posted by Nathan Welch, Engineering Director/Co-founder, Smash.gg, on the Google Cloud Platform Blog

[Editor's note: Smash.gg is an esports platform used by players and organizers worldwide, running nearly 2,000 events per month with 60,000+ competitors, and recently hosted brackets for EVO 2017, the world's largest fighting game tournament. This is its first post in a multi-part series about migrating to Google Cloud Platform (GCP) -- what got them interested in GCP, why they migrated to it, and a few of the benefits they've seen as a result. Stay tuned for future posts that will cover more technical details about migrating specific services.]

Players in online tournaments running on smash.gg need to be able to interact in real time. Both entrants must confirm that they are present, set up the game, and reach a consensus on the results of the match. They also need a simple chat service to resolve any issues with joining or reporting the match, to talk to one another and to tournament moderators.

We built our initial implementation of online match reporting with an off-the-shelf chat service and UI interactions that weren't truly real-time. When the chat service failed in a live tournament, it became clear that we needed a better solution. We looked into building our own using a websocket-based approach, and a few services like PubNub and Firebase. Ultimately, we decided to launch with Firebase because it's widely used, is backed by Google, and is incredibly well-priced.

Two players checking into, setting up, and reporting an online match using the Firebase Realtime Database for real-time interactions.

We got our start with Firebase in May 2016. Our first release used the Firebase Realtime Database as a kind of real-time cache to keep match data in sync between both entrants. When matches were updated or reported on our backend, we also wrote the updated match data to Firebase. We use React and Flux, so we made a wrapper component to listen to Firebase and dispatch updated match data to our Flux stores. Implementing a chat service with Firebase was similarly easy. Using Firechat as inspiration, it took us about a day to build the initial implementation and another day to make it production-ready.
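
A minimal sketch of that wrapper idea (the component, action creator, and paths are all hypothetical; this is not smash.gg's actual code) looks roughly like this:

// Rough sketch (hypothetical names): a React component that listens to a
// match in the Realtime Database and dispatches updates into a Flux store.
// Assumes the Firebase app has already been initialized.
const React = require('react');
const firebase = require('firebase');
const MatchActions = require('./MatchActions');   // hypothetical Flux action creators

class MatchListener extends React.Component {
  componentDidMount() {
    this.ref = firebase.database().ref('matches/' + this.props.matchId);
    this.ref.on('value', snapshot => {
      // Push the latest match data through the Flux dispatcher so the stores
      // (and therefore the UI) stay in sync in real time.
      MatchActions.matchUpdated(snapshot.val());
    });
  }

  componentWillUnmount() {
    this.ref.off();   // stop listening when the component unmounts
  }

  render() {
    return null;   // this component only wires up the listener
  }
}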

Compared with rolling our own solution, Firebase was an obvious choice given the ease of development and time/financial cost savings. Ultimately, it reduced the load on our servers, simplified our reporting flow, and made the match experience truly real-time. Later that year, we started using Firebase Cloud Messaging (FCM) to send browser push notifications using Cloud Functions triggers as Firebase data changed (e.g., to notify admins of moderator requests). Like the Realtime Database, Cloud Functions was incredibly easy to use and felt magical the first time we used it. Cloud Functions also gave us a window into how well Firebase interacts with Google Cloud Platform (GCP) services like Cloud Pub/Sub and Google BigQuery.
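
To sketch that trigger-plus-FCM combination (the database paths and admin token storage are hypothetical, not the production code):

// Sketch: when a moderator request is written to the Realtime Database,
// notify admins via FCM. The paths and token storage are hypothetical.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

exports.notifyAdmins = functions.database.ref('/modRequests/{requestId}')
  .onCreate(event => {
    const request = event.data.val();
    const payload = {
      notification: {
        title: 'New moderator request',
        body: request.summary || 'A player needs help with a match.'
      }
    };
    // Look up the admins' FCM registration tokens, then send the notification.
    return admin.database().ref('/adminTokens').once('value')
      .then(snapshot => admin.messaging().sendToDevice(Object.keys(snapshot.val() || {}), payload));
  });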

Migrating to GCP

In March of 2017 we attended Google Cloud Next '17 for the Cloud Functions launch. There, we saw that other GCP products had a similar focus on improving the developer experience and lowering development costs. Current products like Pub/Sub, Stackdriver Trace and Logging, and Google Cloud Datastore solved some of our immediate needs. Out of the box, these services gave us things that we were planning to build to supplement products from our existing service provider. And broadly speaking, GCP products seemed to focus on improving core developer workflows to reduce development and maintenance time. After seeing some demos of the products interacting (e.g., Google Container Engine and App Engine with Stackdriver Trace/Logging, Stackdriver with Pub/Sub and BigQuery), we decided to evaluate a full migration.

We started migrating our application in mid-May, using the following services: Container Engine, Pub/Sub, Google Cloud SQL, Datastore, BigQuery, and Stackdriver. During the migration, we took the opportunity to re-architect some of our core services and move to Kubernetes. Most of our application was already containerized, but it had previously been running on a PaaS-like service, so Kubernetes was a fairly dramatic shift. While Kubernetes had many benefits (e.g., industry standard, more efficient use of cloud instances, application portability, and immutable infrastructure defined in code), we also lost some top-level application metrics that our previous PaaS service had provided: for instance, overall requests per second (RPS), RPS by status, and latency. We were able to easily recreate these graphs from our container logs using log-based metrics and logs export from Stackdriver to BigQuery. You could also do this using other services, but our GCP-only approach was a quick and mostly free way for us to get to parity while experimenting with GCP services.

Request timing and analysis using Stackdriver Trace was another selling point in GCP that we didn't have with our previous service. However, at the time of our migration, the Trace SDK for PHP (our backend services are in PHP, but I promise it's nice PHP!) didn't support asynchronous traces. The Google Cloud SDK for PHP has since added async trace support, but we were able to build async tracing by quickly gluing some GCP services together:

  1. We built a trace reporter to log out traces as JSON.
  2. We then sent the traces to a Pub/Sub topic using Stackdriver log exports.
  3. Finally, we made a Pub/Sub subscriber in Cloud Functions to report the traces using the REST API (a rough sketch follows this list).
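
The subscriber side of that pipeline can be surprisingly small. Here's a rough sketch (the topic name and the reportToStackdriverTrace() helper are hypothetical; the real step calls the Stackdriver Trace REST API):

// Sketch (hypothetical topic and helper): a Cloud Function subscribed to the
// Pub/Sub topic receiving exported trace logs, which forwards each trace to
// the Stackdriver Trace API.
const functions = require('firebase-functions');

exports.reportTraces = functions.pubsub.topic('exported-traces').onPublish(event => {
  // Each Pub/Sub message carries one exported log entry; its jsonPayload is
  // the JSON trace that the backend logged in step 1.
  const logEntry = event.data.json;
  return reportToStackdriverTrace(logEntry.jsonPayload);   // hypothetical helper wrapping the REST call
});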

The Google Cloud SDK is certainly a more appropriate solution for tracing in production, but the fact that this combination of services worked well speaks to how easy it is to develop in GCP.

Post-migration results

After running our production environment on GCP for a month, we've saved both time and money. Overall costs are ~10% lower without any Committed Use Discounts, with capacity to spare. Stackdriver logging/monitoring, Container Engine, and Kubernetes have made it easier for our engineers to perform DevOps tasks, leveling up our entire team. And being able to search all our logs in one centralized place allows us to easily cross-reference logs from multiple systems, making it possible to track down root causes of issues much faster. This, combined with fully managed, usage-priced services like Datastore and Firebase, means development on GCP is easier and more accessible to all of our engineers. We're really glad we migrated to GCP, and look forward to telling you more about how we did it in future posts. Meanwhile, if you're a developer who loves competitive play and would like to help us build cool things on top of GCP, we'd love to hear from you. We recently closed our Series A from Spark Capital, Accel, and Horizon Ventures, and we're hiring!