An Automated Assistant Enabled Vehicle
What will Google’s Automated Assistants be capable of doing tomorrow? Chances are they will be involved in running smart homes and Internet of Things devices and helping us drive vehicles. A patent granted to Google this week describes using an automated assistant to control a vehicle. This won’t be implemented soon, but it might be something we are driving in the not-too-distant future.

An Automated Assistant Controlling a Vehicle in the Future

Humans may engage in human-to-computer dialogs with interactive software applications referred to herein as an “automated assistant.”

I have written a few previous posts about Google’s Automated Assistants, which interact with humans in a variety of ways.

I have a speaker device that is an automated assistant. I use it to perform some searches, listen to music, and send some search results to my phone. It doesn’t do anything as ambitious as helping me drive a vehicle, but this patent may be an illustration of what Google’s automated assistant will be able to do in the future.


Under this patent, humans may provide commands and requests to an automated assistant using spoken natural language input (such as utterances), which may in some cases get converted into text and then processed, or by providing textual (e.g., typed) natural language input.

An automated assistant can get integrated into a variety of electronic devices, including vehicles. Unlike other computers such as mobile phones, vehicles are generally in motion over a large area and thus are more susceptible to bandwidth restrictions during communications with an outside server.


This can in part result from the vehicle moving through areas that do not provide adequate network coverage. This can affect automated assistant operations, which may involve many round trips between a vehicle computer and a remote server.

Automated assistants may have access to publicly-available data as well as user-specific data, which can get associated with a personal user account served by the automated assistant. An automated assistant serving many users may have many accounts with different data available for each account.

Commanding Automated Assistant

Thus, if one user makes a request to an automated assistant, and responding to the request involves accessing a second user account, the automated assistant may not be able to complete the request without prompting the second user to log in to their account and repeat the request.


As a result, computational and communication resources, such as network bandwidth and channel usage time, can get consumed by the increased number of interactions between the vehicle computer and the server.

Other Users Overriding Restrictions

Implementations described herein relate to limiting vehicle automated assistant responsiveness according to restrictions that get used to determine whether certain input commands and certain users get restricted in certain vehicle contexts. Furthermore, implementations described herein allow for other users to override certain restrictions by providing authorization via an input to the vehicle computer or another computer.

Allowing other users to override such restrictions can preserve computational resources, as less processing resources and network bandwidth would get consumed when a restricted user does not have to rephrase and resubmit certain inputs in a way that would make the inputs permissible.

As an example, a passenger that provides a spoken input to a vehicle automated assistant such as “Assistant, send a message to Karen,” may get denied because the passenger is not the owner of the vehicle or otherwise permitted to access contacts accessible to the vehicle automated assistant.

As a result, the vehicle automated assistant can provide a response such as “I’m sorry, you are not authorized for such commands,” and the passenger would have to rephrase and resubmit the spoken input as, for example, “Ok, Assistant, send a message to 971-555-3141.”

Such a dialog session between the passenger and the vehicle automated assistant can waste computational resources as the later spoken input would have to get converted to audio data, transmitted over a network, and processed.

In a situation where available bandwidth gets limited or variable, such as in a moving vehicle, this might be particularly undesirable, since the channel over which data gets communicated from the assistant device, over the network, may need to get used for longer than desirable.

The length of time such a channel gets used might impact not only the operations of the automated assistant but also other software applications which rely on the network to send and receive information.

Such software applications may, for example, be present in the same device as the automated assistant (e.g. other in-vehicle software applications). However, implementations provided herein can eliminate such wasting of computational and communication resources by at least allowing other users to authorize the execution of certain input commands from a user, without requesting the user to re-submit the commands.

Restriction Of Access To Commands

A vehicle computer and an automated assistant can operate according to different restrictions for restricting access to commands and data that would otherwise be accessible via the vehicle computer and the automated assistant. A restriction can characterize particular commands, data, types of data, and any other inputs and outputs that can get associated with an automated assistant, thereby defining certain information that is available to other users via the automated assistant and the vehicle computer.

When a user provides a spoken utterance corresponding to a particular command characterized by a restriction, the automated assistant can respond according to any restriction that gets associated with the user and the particular command. As an example, when a user provides a spoken utterance that corresponds to data that originated at a computer owned by another user, the spoken utterance can satisfy a criterion for restricting access to such data.

However, in response to receiving the spoken utterance, the automated assistant can determine that the criterion gets satisfied and await authorization from the other user. The authorization can get provided by the other user to the vehicle computer or a separate computer via another spoken utterance or any other input capable of getting received at a computer.

A vehicle that includes the vehicle computer can include an interface, such as a button (e.g., on the steering wheel of the vehicle), that the other user can interact with (e.g., depress the button) in order to indicate authorization to the automated assistant.

In response to the automated assistant receiving authorization from the other user, the automated assistant can proceed with executing the command provided by the user, without necessarily requesting further input from the user.
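To make this flow concrete, here is a minimal sketch, in Python, of how an assistant might hold a restricted command until an unrestricted user authorizes it. Everything here (the VehicleAssistant class, the authorize method, the keyword-based restriction check) is hypothetical; the patent describes the behavior, not an implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PendingCommand:
    speaker: str      # who issued the command
    utterance: str    # e.g., "send a message to Karen"

class VehicleAssistant:
    """Toy sketch of the authorization flow described in the patent."""

    def __init__(self, owner: str):
        self.owner = owner
        self.pending: Optional[PendingCommand] = None

    def handle_utterance(self, speaker: str, utterance: str) -> str:
        # Commands touching the owner's contacts are restricted for other users.
        restricted = ("message" in utterance or "call" in utterance) and speaker != self.owner
        if restricted:
            # Hold the command instead of rejecting it, so the passenger
            # never has to rephrase and resubmit the input.
            self.pending = PendingCommand(speaker, utterance)
            return "Awaiting authorization from the vehicle owner."
        return f"Executing: {utterance}"

    def authorize(self, authorizer: str) -> str:
        # Authorization could arrive as a spoken phrase or a steering-wheel button press.
        if authorizer == self.owner and self.pending is not None:
            done, self.pending = self.pending, None
            return f"Executing: {done.utterance}"
        return "Nothing pending, or authorizer not permitted."

assistant = VehicleAssistant(owner="driver")
print(assistant.handle_utterance("passenger", "send a message to Karen"))
print(assistant.authorize("driver"))  # releases the held command
```

Holding the command rather than rejecting it is the resource saving the patent emphasizes: the passenger’s utterance never has to be re-spoken, re-converted to audio data, and re-transmitted over the network.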

Automated Assistant Limiting Access To Passengers

Another user can limit a passenger from accessing certain data while the other user and the passenger are riding in the vehicle. The other user can limit access to certain data while the vehicle is navigating along a particular route and to a particular destination. Therefore, when the vehicle completes the route and arrives at the particular destination, a restriction on access to the particular data for the passenger can get released, thereby allowing the passenger to subsequently access such data.

For instance, when the other user is driving the vehicle and the passenger is riding in the vehicle, the passenger can provide a spoken utterance to an automated assistant interface of the vehicle. The spoken utterance can be, “Assistant, call Aunt Lucy.”

Automated Assistant Awaiting Authorization From The User

In response, and because the spoken utterance includes a request that will result in accessing the contact information of the user, the automated assistant can await authorization from the user before fulfilling the request. However, in order to eliminate having to repeatedly authorize or not authorize requests originating from the passenger, the user can provide another spoken utterance such as, “Assistant, do not respond to the passenger for the remainder of this trip.”

In response, the automated assistant can cause restriction data to get generated for limiting access to services (e.g., making phone calls) that would otherwise be available via the automated assistant.

In this way, the user would not have to repeatedly authorize or not authorize the automated assistant to respond to requests from the passenger, thereby eliminating the waste of computational resources and network resources. Furthermore, because the access restrictions can be set to “reset” at the end of a trip, or upon reaching a destination, the user would not have to explicitly request a reset of restrictions, thereby further eliminating the waste of computational resources and network resources.

The user can also limit a passenger’s access to certain data indefinitely, for the operational lifetime of the vehicle.

For instance, subsequent to the passenger providing the spoken utterance, “Assistant, call Aunt Lucy,” and while the automated assistant is awaiting authorization from the user, the user can provide a separate spoken utterance such as, “Assistant, never respond to this passenger.”

Automated Assistant Causing Restriction Data To Get Generated

In response, the automated assistant can cause restriction data to get generated, limiting access (possibly for an operational lifetime of the vehicle, the vehicle computer, and the automated assistant) to services that would otherwise be available to a particular user via the automated assistant.

Depending on the occupancy of the vehicle, the automated assistant and the vehicle computer can operate according to an operating mode that limits access to the automated assistant and the vehicle computer for certain passengers. As an example, when a user is the only person occupying a vehicle, a vehicle computer and an automated assistant that is accessible via the vehicle computer can operate according to a first operating mode.

Occupancy Of Vehicle Determined Based On Output Of Sensors Or Operating Modes

The occupancy can get determined based on an output of sensors of the vehicle, the vehicle computer, or any other device that can provide an output from which occupancy can get estimated. The first operating mode can get selected based on the occupancy and can provide the user access to a first set of services, data, and commands associated with the automated assistant.

When the occupancy gets determined to include more than the user, such as when the user is driving with passengers (e.g., a parent driving with many children as passengers), a second operating mode can get selected. In accordance with the second operating mode, the user can still access the first set of services, data, and commands; however, the passengers would only be able to access the second set of services, data, and commands.

The second set can be different than the first set, and the second set can be a reduced subset relative to the first set. For example, pushing the “talk” button on the head unit when only a driver (e.g., an unrestricted user) is in the vehicle can cause the assistant to respond with private data without any further authorization.

However, if the “talk” button on the head unit gets pushed when a passenger (e.g., a restricted user) is in the vehicle with the driver, the automated assistant may request further authorization before responding to whoever (e.g., the passenger) pressed the “talk” button on the head unit.
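Here is a rough sketch of how occupancy might select between these operating modes. The service sets and the seat-count threshold are hypothetical; the patent does not define what belongs in each set.

```python
from enum import Enum, auto

class Mode(Enum):
    SOLO = auto()     # first operating mode: driver alone, full access
    SHARED = auto()   # second operating mode: passengers get a reduced subset

FULL_ACCESS = {"private_contacts", "calendar", "messages", "music", "navigation"}
SHARED_ACCESS = {"music", "navigation"}  # reduced subset for passengers

def select_mode(occupancy: int) -> Mode:
    # Occupancy would come from seat sensors, the vehicle computer, etc.
    return Mode.SOLO if occupancy <= 1 else Mode.SHARED

def services_for(mode: Mode, is_driver: bool) -> set:
    # The driver keeps the first set of services in either mode;
    # passengers only see the second, reduced set.
    if mode is Mode.SOLO or is_driver:
        return FULL_ACCESS
    return SHARED_ACCESS

# Example: with two occupants, a "talk" button press yields the reduced
# set unless the press comes from the driver.
mode = select_mode(occupancy=2)
print(services_for(mode, is_driver=False))  # {'music', 'navigation'}
```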

While the second operating mode (e.g., a shared operating mode) is active, a passenger can attempt to access a service, data, or a command that gets exclusively provided in the first set, and not the second set. In order to permit such access, the user (e.g., the driver) can provide inputs to the automated assistant or the vehicle computer in order to authorize such access.

The user can provide, for example, an input to an interface such as a button or touch display panel, which can get located within reach of the driver of the vehicle (e.g., a button on a steering wheel, a touch display panel integral to a dashboard or console). The authorizing input can get provided in response to the automated assistant soliciting authorization from the user (e.g., “Sorry, I need authorization to do that . . . [authorizing input received]”).

Alternatively, the automated assistant can bypass soliciting the user for authorization, and, rather, passively wait to respond to a request from a passenger until the user provides an authorizing input.

However, the user can instead elect to have their automated assistant and their vehicle computer operate according to a third operating mode, in which no option to provide such authorization is available.

In the third operating mode, the automated assistant and the vehicle computer can operate such that the availability of certain operations, data, and services gets limited for some passengers (at least relative to a user that is a primary and “master” user with respect to the automated assistant and the vehicle computer).

Automated Assistant Routines

An automated assistant can perform automated assistant routines. An automated assistant routine can correspond to a set and sequence of actions performed and initialized by the automated assistant in response to a user providing a particular input. The user can provide a spoken utterance such as, “Assistant, let’s go to work,” when the user enters their vehicle, in order to cause the automated assistant to perform a “Going to Work” routine.

The “Going to Work” routine can involve the automated assistant causing the vehicle computer to render graphical data corresponding to a daily schedule of the user and render audio data corresponding to a podcast selected by the user.  It can generate a message to a spouse of the user indicating that the user is headed to work (e.g., “Hi Billy, I’m headed to work.”). In some instances, however, a passenger of the vehicle can provide the spoken utterance, “Assistant, let’s go to work.”

Depending on the mode that the vehicle computer and the automated assistant are operating in, the automated assistant can request that the driver, or another authorized user, provide permission to perform actions of a requested routine.

The Automated Assistant “Going to Work” Routine

For example, in response to the passenger invoking the “Going to Work” routine, the automated assistant can initialize performance of the routine by rendering audio data corresponding to a particular podcast, and also prompt the driver for authorization to initialize other actions of the routine.

Specifically, the vehicle computer and server device can identify actions of the routine that involve accessing restricted data. In this instance, the vehicle computer and the server device can determine that the schedule of the user and the contacts of the user (for sending the message) are restricted data.

As a result, during the performance of the routine, the driver can get prompted at times to give permission to execute any actions involving accessing restricted data.

If the driver gives authorization (e.g., via an assistant invocation task), by speaking an invocation phrase (e.g., “Ok, Assistant.”) or interacting with an interface (e.g., pressing a button), the routine can get completed. For instance, the message can get sent to the spouse and the schedule of the driver can get rendered audibly.

However, if authorization is not provided by the driver (e.g., the driver does not perform an assistant invocation task), the automated assistant can bypass the performance of such actions. When the driver does not provide authorization to complete the actions, alternative actions can get provided as options to the passenger.

For instance, instead of audibly rendering the schedule of the driver, the automated assistant can render public information about events that are occurring in the nearby geographic region.
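Here is a small, hypothetical sketch of the routine behavior just described: restricted actions get gated behind a driver-authorization callback, and they fall back to an alternative action when authorization is withheld. The “Going to Work” actions and fallbacks below are illustrative only.

```python
from typing import Callable, Optional

# Each routine action carries an optional fallback used when the driver
# withholds authorization (e.g., public events instead of the private schedule).
Action = tuple[str, bool, Optional[str]]  # (action, restricted?, fallback)

GOING_TO_WORK: list[Action] = [
    ("play commute podcast", False, None),
    ("read owner's daily schedule", True, "read public local events"),
    ("message spouse 'headed to work'", True, "offer passenger login to message"),
]

def run_routine(actions: list[Action], driver_authorizes: Callable[[str], bool]) -> None:
    for action, restricted, fallback in actions:
        if not restricted:
            print(f"Performing: {action}")
        elif driver_authorizes(action):   # e.g., "Ok, Assistant" or a button press
            print(f"Performing (authorized): {action}")
        elif fallback:
            print(f"Restricted; offering instead: {fallback}")
        else:
            print(f"Skipping restricted action: {action}")

# Example: the driver approves reading the schedule but not the message.
run_routine(GOING_TO_WORK, driver_authorizes=lambda a: "schedule" in a)
```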

Sending A Message

Instead of sending a message to the spouse of the driver, the automated assistant can prompt the passenger regarding whether they would like to have a message transmitted via their own account (e.g., “Would you like to log in, in order to send a message?”). Restrictions on the data of the driver would get enforced while simultaneously providing assistance to a passenger who may be in the vehicle due to, for example, participation in a ride-sharing activity.

The above description gets provided as an overview of some implementations of the present disclosure.

Other implementations may include a system of computers and robots that include processors operable to execute stored instructions to perform a method such as one of the methods described above and elsewhere herein.

This automated assistant-enabled vehicle is described in this patent:

Modalities for authorizing access when operating an automated assistant enabled vehicle
Inventors: Vikram Aggarwal and Moises Morgenstern Gali
Assignee: GOOGLE LLC
US Patent: 11,318,955
Granted: May 3, 2022
Filed: February 28, 2019

Abstract:

Implementations relate to enabling of authorization of certain automated assistant functions via one or more modalities available within a vehicle.

Implementations can eliminate wasting of computational and communication resources by at least allowing other users to authorize execution of certain input commands from a user, without requesting the user to re-submit the commands.

The vehicle can include a computing device that provides access to restricted data, which can be accessed in order for an action to be performed by the automated assistant.

However, when a restricted user requests that the automated assistant perform an action involving accessing the restricted data, the automated assistant can be authorized or unauthorized to proceed with fulfilling the request via a modality controlled by an unrestricted user.

The unrestricted user can also cause contextual restrictions to be established for limiting functionality of the automated assistant during a trip, for certain types of requests, and/or for certain passengers.

 

Automated Assistant Enabled Vehicle Conclusion

I have only written about the summary of this patent in this post. If you want to learn how this automated assistant patent could work in greater detail, click through to the patent itself. This summary provides some insight into how control over a vehicle would be established using an automated assistant.

At this time, Automated Assistants tend to be smaller devices such as smart speakers. Chances are that they will grow to do things such as control vehicles, as shown in this patent. The interface is different from the one that Google devices tend to use: it follows a more conversational format than a desktop or laptop computer. I was reminded of Android Auto while reading this patent. I can see Google wanting to have cars controlled by something like Android Auto or the Automated Assistant.


An Automated Assistant Enabled Vehicle is an original blog post first published on Go Fish Digital.

Navigation and Transit Communication at Google

I recently wrote a post about Locally Prominent Semantic Features. It focused on Google finding ways to improve its navigation services, which can often benefit commerce by making it easier for consumers to find, shop at, and use services at businesses.


Closely related to navigation services are the transportation services that a search engine such as Google provides information about. I have been writing about Google Transit services on mobile devices since at least 2007. Google Maps is a useful way to navigate to many places, using Google’s Location History to help me get to them.

So when a Google patent about navigation and transit communication between Google applications got granted, I felt I needed to learn more and share what I learned.

This new patent relates to inter-application communications between Navigation and Transit at Google.

Navigation Communication At Google

Today, digital maps of geographic areas get displayed on computers such as desktops, tablets, and mobile phones via mapping applications and web browsers. Many mapping applications provide a searcher with the ability to select the type of map information or features for viewing and adjust the display of a digital map.

Additionally, mapping application providers offer application programming interfaces (APIs) for accessing map and navigation data to display digital maps and provide step-by-step navigation directions to a destination location. A ride service application may invoke a mapping application API to provide a digital map of a geographic area that includes:

  • A pick-up location for the user
  • A destination location
  • Navigation directions for traveling to the destination location
  • Etc.

To provide ride services within a mapping application without directing the user to a separate ride service application, the mapping application invokes ride service APIs to access ride service data from various ride service providers. A person may request navigation directions within the mapping application to a destination location. That user may then select from several modes of transportation for traveling to the destination location, including a ride service mode.

Pick-Up Locations

When the user selects the ride service mode, the mapping application may communicate with ride service applications and invoke respective ride service APIs. The mapping application communicates with the ride service applications and ride service servers to retrieve the types of ride services each provider offers.

Types of ride services may include:

  • A carpooling ride service, where the ride service providers pick up more passengers on the way to the user’s destination
  • A taxi service that does not pick up more passengers on the way to the user’s destination
  • A limo service that includes more features within the vehicle
  • Extra-large vehicle service for picking up large groups of passengers
  • Etc

The mapping application may also communicate with the ride service applications to:

  • Retrieve price estimates for each type of ride service
  • Monitor wait times for each type of ride service
  • Track ride duration for each type of ride service
  • Record ride status information about the status of the trip (e.g., waiting for the driver to accept the ride, waiting for the driver to arrive at the pick-up location, ride in progress, ride completed)
  • Locate many vehicles within a geographic area surrounding the user’s current location
  • Etc

In some scenarios, ride service applications do not need to get downloaded to the user’s client device. Instead, the mapping application invokes the respective ride service APIs to communicate with ride service servers.
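As a sketch of that arrangement, the mapping application could talk to a ride service server over plain HTTPS, with no installed ride app involved. The endpoint URL and the request/response shapes below are assumptions for illustration, not any provider’s real API.

```python
import json
from urllib import request

RIDE_SERVICE_URL = "https://rides.example.com/v1/estimates"  # hypothetical endpoint

def fetch_ride_options(lat: float, lng: float,
                       dest_lat: float, dest_lng: float) -> list[dict]:
    """Query a ride service server directly; no installed ride app required."""
    query = json.dumps({
        "pickup": {"lat": lat, "lng": lng},
        "dropoff": {"lat": dest_lat, "lng": dest_lng},
    }).encode()
    req = request.Request(RIDE_SERVICE_URL, data=query,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        # Assumed response shape:
        # [{"type": "carpool", "price": 9.75, "wait_min": 6}, ...]
        return json.loads(resp.read())
```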

Ride Service Providers


The user may then select a ride service provider and type of ride service from the mapping application to order transportation services to her destination location. A user may select from several candidate ride service providers within the mapping application without opening each of the corresponding ride service applications for comparison and without leaving the mapping application.

Moreover, a user may identify pick-up locations and destination locations in an application with built-in map functionality. The user may view a three-dimensional street-level view of the area around the pick-up location so that the user may find the driver at the pick-up location.

The mapping application may also provide recommendations on pick-up locations based on the context and location of the user, along with walking directions from the user’s current location to the pick-up location.

Destination Location Communication

In particular, an example embodiment of the techniques of the present disclosure is a method in a computer for providing multi-modal travel directions. The method includes receiving, via a user interface, a request to get travel directions to a destination and generating multi-modal travel directions for traveling to the destination.

Generating the multi-modal travel directions includes obtaining, from a third-party provider of a ride service, an indication of a ride to traverse a first segment of the route between a pick-up location and a drop-off location, the ride service defining the first mode of transport, and obtaining navigation directions to traverse a second segment of the route using a second mode of transport different from the first mode. The method further includes providing an indication of the generated multi-modal directions via the user interface.
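Here is a minimal sketch of what stitching those two segments together might look like. The Segment record, the function name, and the durations are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    mode: str      # "ride" for the third-party leg; "walking", "transit", ...
    start: str
    end: str
    minutes: float

def multimodal_directions(pickup: str, dropoff: str, dest: str,
                          ride_minutes: float, walk_minutes: float) -> list[Segment]:
    # First segment: the third-party ride between pick-up and drop-off.
    # Second segment: navigation directions using a different mode (walking here).
    return [
        Segment("ride", pickup, dropoff, ride_minutes),
        Segment("walking", dropoff, dest, walk_minutes),
    ]

for seg in multimodal_directions("Main St corner", "5th & Oak", "office", 18.0, 4.0):
    print(f"{seg.mode}: {seg.start} -> {seg.end} ({seg.minutes:.0f} min)")
```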

Another example embodiment is a computer including a user interface, one or more processors, and a non-transitory computer-readable medium storing instructions thereon.

Navigation Directions for Traveling to The Destination Location

When executed by the processors, the instructions cause the computer to receive a request to obtain travel directions to a destination and generate multi-modal travel directions for traveling to the destination. To generate the multi-modal travel directions, the instructions cause the computer to obtain, from a third-party provider of a ride service, an indication of a ride to traverse a first segment of the route between a pick-up location and a drop-off location.

The ride service would define the first mode of transport, and the instructions would cause the computer to obtain navigation directions to traverse a second segment of the route using a second mode of transport different from the first mode. The instructions further cause the computer to provide an indication of the generated multi-modal directions via the user interface.

Yet another example is a method in a computer for providing multi-modal travel directions. The method includes providing an interactive digital map via a user interface, receiving a request to get travel directions to a destination, and obtaining, from a third-party provider of a ride service, an indication of a ride from a pick-up location to a drop-off location to traverse at least a part of the route.

The method further includes receiving, from the third-party provider of the ride service, visualization information for rendering a visualization of the ride on the digital map, and generating the visualization of the ride on the digital map from the received visualization information.

Navigation and Transit Communication Between Applications

This can include a computer including a user interface, one or more processors, and a non-transitory computer-readable medium.

When executed by the processors, the instructions cause the computer to provide an interactive digital map via a user interface, to receive, via the user interface, a request to get travel directions to a destination, and to obtain, from a third-party provider of a ride service, an indication of a ride from a pick-up location to a drop-off location to traverse at least a part of the route.

These instructions further cause the computer to receive, from the third-party provider of the ride service, visualization information for rendering a visualization of the ride on the digital map, and to then generate the visualization of the ride on the digital map from the received visualization information.

Another embodiment is a method in a portable computer for providing ride service information on a digital map.

This method includes:

  • Providing, via a user interface, an interactive digital map of a geographic area
  • Receiving, via the user interface, a request to get travel directions to a destination
  • Requesting from a plurality of third-party providers of ride services, respective indications of candidate rides for at least a part of a route to the destination, each of the indications including a pick-up location, a price estimate, and pick-up time

The Navigation and Transit Communication Method Further Includes:

  • Maintaining the requested indications of the candidate rides
  • Determining a ranking of the candidate rides according to at least one of price and pick-up time (a minimal sketch of this ranking step appears after this list)
  • Providing, on the digital map, a listing of the candidate rides according to the determined ranking
  • Transmitting a request for the selected ride to the corresponding third-party provider
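Here is that ranking step as a minimal sketch. The weighted blend of price and pick-up time is one plausible reading; the patent only requires ranking by at least one of the two, and the weights below are invented.

```python
from dataclasses import dataclass

@dataclass
class CandidateRide:
    provider: str
    price: float           # price estimate in dollars
    pickup_minutes: float  # estimated wait until pick-up

def rank_rides(rides: list[CandidateRide], price_weight: float = 1.0,
               wait_weight: float = 0.5) -> list[CandidateRide]:
    # Lower blended cost ranks higher; callers can weight price vs. wait.
    return sorted(rides, key=lambda r: price_weight * r.price
                                       + wait_weight * r.pickup_minutes)

rides = [
    CandidateRide("ProviderA", 14.50, 3),
    CandidateRide("ProviderB", 11.00, 9),
    CandidateRide("ProviderC", 12.25, 5),
]
for r in rank_rides(rides):
    print(r.provider, r.price, r.pickup_minutes)
```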

Yet another example is a method in a portable computer for providing map data related to a ride service on a computer. This method includes:

  • Displaying an interactive two-dimensional digital map via a user interface
  • Receiving a request to get travel directions to a destination
  • Obtaining from a third-party provider of a ride service an indication of a ride from a pick-up location to a drop-off location to traverse at least a part of the route
  • Obtaining street-level imagery for the pick-up location
  • Transitioning the two-dimensional digital map to an interactive three-dimensional panoramic display of street-level imagery

This navigation and transit communications system is described in this patent:

Providing street-level imagery related to a ride service in a navigation application
Inventors: Jon Ovrebo Dubielzyk and Scott Ogden
Assignee: GOOGLE LLC
US Patent: 11,099,025
Granted: August 24, 2021
Filed: December 14, 2018

Abstract

An interactive two-dimensional digital map gets provided via a user interface. A request to get travel directions to a destination gets received. An indication of a ride from a pick-up location to a drop-off location to traverse at least a part of the route gets obtained from a third-party provider of a ride service. Street-level imagery for the pick-up location gets obtained and displayed on the digital map. In response to detecting a selection of the street-level imagery via the user interface, the two-dimensional digital map gets transitioned to an interactive three-dimensional panoramic display of street-level imagery.

Navigation and Transit Communication Overview

Generally speaking, techniques for providing ride services within a mapping application can get implemented in a mapping application operating in a portable computer or a wearable device, one or several network servers, or a system that includes a combination of these devices. But, for clarity, the examples below focus on an embodiment in which a user requests ride services via a mapping application within a portable computer.

The mapping application invokes one or several ride service APIs to communicate with respective ride service applications and ride service servers. The mapping application may also communicate with a map data server and a navigation data server to retrieve map and navigation data for displaying an interactive two-dimensional digital map of a geographic area surrounding the user’s current location and navigation directions to a destination location (also referred to herein as a “drop-off location”) selected by the user.

The mapping application may then display ride service data for one or several ride service providers, including the types of ride services offered by each ride service provider, price estimates for each type of ride service, wait times for each type of ride service, ride duration for each type of ride service, vehicles within a geographic area surrounding the user’s current location, etc.

When the user selects a ride service provider and type of ride service, the mapping application may prompt the user to select a pick-up location. The mapping application provides a default pick-up location near the user’s current location. The user may adjust the pick-up location via user controls. Also, the mapping application may provide a recommended pick-up location based on the user’s current location and context information.

In an area with several one-way streets, the mapping application may recommend a pick-up location at a street that allows drivers to travel in the direction of the destination location so that the driver does not need to make unnecessary turns after picking up the user. In another example, the recommended pick-up location may get determined based on traffic to avoid streets with heavy traffic to cut costs.
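Here is a toy sketch of that recommendation logic, scoring hypothetical candidate pick-up spots by walking time, traffic delay, and whether the street heads toward the destination. The penalty values and Candidate fields are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    walk_minutes: float        # walking time from the user's current location
    heads_toward_dest: bool    # street direction matches the destination
    traffic_delay_minutes: float

def recommend_pickup(candidates: list[Candidate]) -> Candidate:
    # Penalize streets pointing away from the destination (forcing extra turns)
    # and streets with heavy traffic, as the patent suggests.
    def cost(c: Candidate) -> float:
        turn_penalty = 0.0 if c.heads_toward_dest else 4.0
        return c.walk_minutes + c.traffic_delay_minutes + turn_penalty
    return min(candidates, key=cost)

spots = [
    Candidate("front door", 0.5, False, 6.0),       # one-way street, wrong direction
    Candidate("around the corner", 2.0, True, 1.0), # points toward the destination
]
print(recommend_pickup(spots).name)  # "around the corner"
```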

In response to selecting the pick-up location, the mapping application may invoke a ride service API corresponding to the selected ride service provider and provide rider identification information for the user, the requested pick-up location, and the type of ride service to the corresponding ride service application. The ride service application may then provide a ride identifier, an updated wait time, an updated price estimate, an updated ride duration, and driver identification information for display on the mapping application via the ride service API. As a result, a driver may pick up the user at the requested pick-up location and drop the user off at the destination location.

Some Example Hardware and Software Components For Navigation and Transit Communication

Navigation and transit communication are essential for this system to work correctly. Because of that, it is helpful to take a good look at the many parts involved in this system.

This patented process includes a portable device configured to execute one or several ride service applications and a mapping application. Besides the client computer, the communication system includes a server device, such as a navigation server device configured to provide a map display and navigation data to the client computer.

The communication system also includes a third-party provider device. It operates independently and separately from the server device. It may also be configured to communicate with the client computer and the server device to provide ride service functionality. The client computer, the server device, and the third-party provider device may get communicatively connected through a network. The network may be a public network, such as the Internet, or a private network, such as an intranet.

The server device can get coupled to a database that stores map data for various geographic areas. The server device can get coupled to a database that stores vehicle data for various vehicles associated with a user of the client computer, vehicles associated with the third-party provider, other vehicles whose data gets collected by the server device, or other servers or combinations of all three.

Communication with Databases That Store Geospatial Information That Assist with Navigation and Transit Communication

More generally, the server device can communicate with one or several databases that store any type of suitable geospatial information or information that can get linked to a geographic context, such as coupons or offers. The server device can also get coupled to a database (not shown) that stores navigation data, including step-by-step navigation directions such as driving, walking, biking, or public transit.

These may get utilized by the ride service application, the mapping application, or both. The server device may request and receive map data from the map data database and relevant vehicle data from the vehicle data database. The server device may include several connected server devices. The map and vehicle data may be stored in several databases connected in a cloud database configuration.

The client computer could be a smartphone or a tablet computer and includes a memory, one or more processors, a network interface, a user interface (UI), and one or several sensors. The memory can be a non-transitory memory and include one or several suitable memory modules, such as random access memory (RAM), read-only memory (ROM), flash memory, or other types of persistent memory. The UI may be a touch screen. More generally, the disclosed techniques can get implemented in other devices, such as laptops or desktop computers, devices embedded in a vehicle such as a vehicle head unit, and wearable devices such as smartwatches or smart glasses.

Depending on the implementation, sensors can include a global positioning system (GPS) module to detect the position of the client computer, a compass to determine the direction of the client computer, a gyroscope to determine the rotation and tilt, an accelerometer, etc.

The Memory Behind this Communication System

The memory stores an operating system (OS), which can be any type of suitable mobile or general-purpose operating system. The OS can include API functions that allow applications such as the ride service and mapping applications to interface with each other and to retrieve sensor readings. A software application configured to execute on the client computer can include instructions that invoke an OS API for retrieving a current location and orientation of the client computer at that instant. The API can also return a quantitative indication of how certain the API is of the estimate (e.g., as a percentage).
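As a sketch, an application consuming such an API might treat the confidence figure as a gate before using the fix. The function below is a stand-in that fabricates readings; real OS location APIs differ in names and shapes.

```python
import random

def get_location_estimate() -> dict:
    """Stand-in for an OS location API call; real platforms expose an accuracy
    figure alongside the fix (expressed here as a confidence percentage)."""
    return {
        "lat": 38.8977 + random.uniform(-0.001, 0.001),
        "lng": -77.0365 + random.uniform(-0.001, 0.001),
        "confidence_pct": random.randint(60, 99),
    }

fix = get_location_estimate()
if fix["confidence_pct"] < 70:
    print("Low-confidence fix; the app might wait for a better GPS reading.")
else:
    print(f"Using location ({fix['lat']:.4f}, {fix['lng']:.4f})")
```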

The memory also stores the mapping application, which gets configured to generate interactive digital maps. The mapping application can receive map data in a raster (e.g., bitmap) or non-raster (e.g., vector graphics) format from the map data database and the server device. In some cases, the map data can get organized into layers, such as a basic layer depicting roads, streets, natural formations, etc., a traffic layer depicting current traffic conditions, a weather layer depicting current weather conditions, a navigation layer depicting a path to reach a destination, etc. The mapping application also can display navigation directions from a starting location to a destination location. The navigation directions may include driving, walking, or public transit directions.

The Mapping Application Behind the Navigation and Transit Communication

The mapping application is a standalone application. Its functionality can also be provided in the form of an online service accessible via a web browser executing on the client computer, as a plug-in or extension for another software application executing on the client computer, etc. The mapping application generally can get provided in different versions for different respective operating systems. The maker of the client computer can provide a Software Development Kit (SDK) including the mapping application for the Android platform, another SDK for the iOS platform, etc.

The server device includes one or more processors, APIs, a network interface, and a memory. The APIs may provide functions for interfacing with applications that may get stored in the memory on the server device. The memory is tangible, non-transitory memory. It may include any types of suitable memory modules, including random access memory (RAM), read-only memory (ROM), flash memory, or other types of persistent memory. The memory stores instructions, executable on the processors, for generating map displays to be displayed by the mapping application for a geographic area.

Similarly, the memory, or the memory in another server, can store instructions that generate navigation directions to a geographic location within the geographic area, which may get displayed overlaying the map display by the mapping application. In some implementations, the third-party provider may start calls to the server device for navigation directions that may get used by the ride service application on the client computer.

The Server Devices Behind the Navigation and Transit Communication

For simplicity, the illustration shows the server device as only one instance of a server. But the server device may include a group of one or more server devices, each equipped with one or more processors and capable of operating independently of the other server devices.

Server devices operating in such a group can process requests from the client computer individually (e.g., based on availability), or in a distributed manner where one operation associated with processing a request gets performed on one server device while another operation associated with processing the same request gets performed on another server device, or according to any other suitable technique. For this discussion, the term “server device” may refer to an individual server device or a group of two or more server devices.

The Third Party Provider Devices Behind the Navigation and Transit Communication

The third-party provider device or ride service provider device may include processors, APIs, a network interface, and a memory. The APIs may provide functions for interfacing with applications that may get stored in the memory of the third-party provider. The memory may include any types of suitable memory modules, including random access memory (RAM), read-only memory (ROM), flash memory, other types of persistent memory, etc.

The memory stores instructions executable on the processors, which can generate, handle and send requests for ride service functions in a ride service application, such as the ride service application stored in the client computer’s memory.

The system may include several third-party provider devices corresponding to several different ride service providers. Also, the client computer may include several ride service applications corresponding to each ride service provider in some instances.

In this manner, a user may compare:

  • Ride service types
  • Price estimates
  • Ride durations
  • Estimated wait times for several ride service providers

A Software Architecture On The Client Computer With Protocols For Communicating

The Navigation and Transit Communication could exist between:

  • The operating system
  • The ride service application
  • The mapping application
  • Services on the client computer
  • As well as other applications

The ride service application exposes a ride service API that gets invoked by the mapping application. The mapping application may allow users to request ride services without leaving the mapping application. This mapping application may provide:

  • Pick-up and destination locations to the ride service API
  • The types of ride services in the geographic area
  • Price estimates for each type of ride service
  • Wait times for each type of ride service
  • Ride duration for each type of ride service
  • Many vehicles within the geographic area
  • Etc

In general, the mapping application may make function calls to the ride service application or a ride service server by accessing the ride service API. The API facilitates inter-application communication. It allows the mapping and ride service applications to maintain control over how processes, logic, and users get handled, while still exposing functionality to other applications.

The applications can communicate using an inter-process communication (IPC) scheme provided by the operating system. In the client computer, the functionality of the ride service application can become provided as a static library of functions accessible via the ride service API. Some or all functions of the ride service application can execute as part of the mapping application.

More generally, the ride service API provides, to the mapping application, access to a ride service using any suitable software architecture and communication schemes, including those currently known in the art. The ride service API generally can get provided in different versions for different respective operating systems. The maker of the client computer can provide a Software Development Kit (SDK) including the ride service API for the Android platform, another SDK for the iOS platform, etc.

In some instances, the mapping application may communicate with several ride service applications via respective APIs. If the user does not have a ride service application that the mapping application communicates with, the user may get prompted to download the ride service application.

If the user does not download the ride service application, the mapping application may instead communicate via the ride service API with a ride service server, such as the third-party provider device.

A Sequence Diagram With Calls Between a Mapping Application and a Ride Service Application Using APIs

The sequence diagram illustrates an example message sequence chart for one implementation of Navigation and Transit Communication. This diagram for Navigation and Transit Communication can include:

  • A user
  • A mapping application
  • A ride service application
  • A ride service API

In the example sequence diagram, the user requests ride services via user controls on the display presented by the mapping application. The user may request directions to a selected destination location for a ride services mode of transportation. The mapping application may generate an API call for ride services to the ride service application API in response to the request. The API call includes a request for ride services, the user’s current location, and the destination location.

The API call is then sent as a request to a ride service application or a ride service server, such as the third-party provider device.

The ride service application may perform its own internal functions to determine:

  • The types of ride services available to service the user
  • Price estimates for transporting the user to the destination location
  • Wait times for picking up the user
  • Many vehicles within a geographic area surrounding the user’s current location
  • Etc

As part of the navigation and transit communication, the ride service application then prepares a response to get sent to the mapping application with:

  • Types of ride services available
  • An estimated time for the arrival of a ride through each type of ride service
  • An estimated price for each type of ride service
  • An estimation of the vehicles/drivers in the area
  • Combinations thereof

The response gets received by the ride service API, then formatted and provided to the mapping application, where it gets handled and manipulated if necessary for display to the user.
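Here is a compact sketch of that round trip, with an assumed request and response shape; the real payloads would be defined by each ride service API.

```python
def build_ride_request(current_location: tuple, destination: tuple) -> dict:
    # The API call carries the request type, the user's current location,
    # and the destination, as in the sequence diagram.
    return {"action": "request_ride_services",
            "origin": current_location, "destination": destination}

def handle_ride_response(response: dict) -> list[str]:
    # Format each offered service for display in the mapping application.
    lines = []
    for svc in response["services"]:
        lines.append(f"{svc['type']}: ~${svc['price']:.2f}, "
                     f"pickup in {svc['eta_min']} min, "
                     f"{svc['nearby_vehicles']} cars nearby")
    return lines

# Example round trip with a canned (assumed) response shape.
req = build_ride_request((40.7128, -74.0060), (40.7580, -73.9855))
resp = {"services": [
    {"type": "carpool", "price": 9.75, "eta_min": 6, "nearby_vehicles": 4},
    {"type": "taxi", "price": 14.20, "eta_min": 3, "nearby_vehicles": 7},
]}
print("\n".join(handle_ride_response(resp)))
```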

What May Get Displayed of the Navigation and Transit Communication To A System User

The mapping application may display indications of each type of ride service available. These services can include:

  • A carpooling ride service
  • A taxi ride service
  • A limo ride service
  • An extra-large vehicle service

Other information from the navigation and transit communication could include:

  • A price estimate for each type of ride service
  • A ride duration for each type of ride service
  • An estimated wait time for each type of ride service

The mapping application may also display indications of vehicles on the map display in proportion to the number of vehicles within the geographic area, as indicated by the ride service API. While the locations of the vehicles on the map display may not be an accurate representation of the vehicles employed by the ride service provider, the number of vehicles on the map display may get used to show the user an approximation of the number of vehicles in the area.

When many ride service providers are available, the mapping application may display the vehicles employed by each ride service provider in a different style or color.

Requests From A System User for Navigation and Transit Communication Information

The displayed indications of available types of ride services may include:

  • Selectable user controls for selecting a type of ride service; the user views the displayed indications and selects a type of ride service
  • A user control for selecting a pick-up location, such as a pin placed at or near the user’s current location; the user may be able to move the pin to another location by entering an address or point of interest, or by dragging the pin to another location

The selected type of ride service is then provided to the ride service API and forwarded to the ride service application.

The ride service application then selects a driver for picking up the user. It transmits driver identification information for the selected driver (e.g., the name of the driver; the vehicle make, model, and color; a license plate number), an updated price estimate, an updated wait time, a ride ID for retrieving status information indicating that the driver is on the way to pick up the user, etc., to the ride service API, where it gets formatted and provided to the mapping application.

The mapping application may present an indication of the driver’s status (e.g., on the way to pick up the user), the updated price estimate, the updated wait time, and the driver identification information to the user.

Transitioning Between User Interfaces During A Ride Service Request Within A Mapping Application

This information reminds me of what I have seen in how Uber and Lyft have been set up. It makes sense that a patent that focuses on navigation and transit communication information would cover such things.

This method can get implemented by a mapping application, a ride service, or any suitable combination.

Showing a Geographic Area Surrounding The User’s Current Location

If you provide navigation and transit communication information for someone, it makes sense to show it for an area near where they are.

Here is what is in this system:

A map display gets presented that includes a geographic area surrounding the user’s current location.

A sign of the user’s current location may also get presented on the map display.

Then, the mapping application presents a search bar for obtaining a geographic search query from a user and providing search results in response to the geographic search query.

The search results may include POIs, addresses, intersections, etc. The user may select one of the search results as a destination location and request directions to the selected destination location.

Selecting Between Different Modes of Transportation

The mapping application may also include user controls for selecting between several modes of transportation, including a ride services mode of transportation. It also provides navigation and transit communication information about a nearby location and those different types of services.

In response to receiving a selection of the ride service mode of transportation, the mapping application may present a ride request display including:

  • Indications of ride service providers
  • Types of ride services from the ride service providers
  • Price estimates for each type of ride service
  • Ride duration for each type of ride service
  • Wait times for each type of ride service
  • Etc

The mapping application may invoke a ride service API for each of one or several ride service applications. It may provide the user’s current location and destination location to each ride service application via the respective APIs.

The Pick Up Request in the Navigation and Transit Communication System

In response to receiving a selection of a ride service provider and type of ride service, the mapping application may present a pick-up request display that includes a user control for selecting a pick-up location. The pick-up request display may include a default pick-up location within a threshold distance of the user’s current location (e.g., 500 feet), which the user may adjust.

The user may enter the pick-up location or drag a pin presented at the default pick-up location to select the pick-up location. The mapping application may provide a recommended pick-up location to save time and money; for example, the recommended pick-up location may be 350 feet from the user’s current location, and the pick-up request display may state that the user can “Save 3 mins and $2” by selecting it. The pick-up request display may also include a user control for confirming the pick-up location, such as a “Confirm Pick-up” button, after the pick-up location gets selected.

In response to selecting a pick-up location, the mapping application may present a wait-for-ride display. The wait-for-ride display may include an indication of the driver’s current location, identification information for the driver, an estimated wait time for the driver to arrive at the selected pick-up location, and a user control for contacting the driver.

Once the driver arrives, the user may get transported to the destination location.

When the user requests ride services within the mapping application, the mapping application provides user login information to a ride service provider to log in to a user profile maintained by the ride service provider.

The user profile may include:

  • Payment methods for the user
  • The name of the user
  • An email address of the user
  • A phone number of the user
  • A picture of the user for the driver to identify the user
  • A rating of the user
  • A ride ID for a ride currently in progress or a ride the user is requesting
  • Any other suitable user profile information

Once the user confirms a ride request, the mapping application may receive a ride ID for retrieving status information for the ride, such as “Waiting for the driver to accept the ride request,” “Waiting for the driver to arrive at the pick-up location,” “Ride in progress,” and “Ride completed.”

Requesting Ride Services Via The Mapping Application By Invoking The Ride Service API

In the past, Google has provided both navigation services and navigation and transit communication information. This patent tells us that it may include third-party information from ride-sharing providers. But it feels like Google may provide those ride-sharing services itself, or may be finding a way to make some money from providing information about those ride-sharing services.

The patent includes a state diagram that depicts several states, such as an initial state, a sign-in state, a confirm/book state, a restored state, a ride-in-progress state, and a transition state. At any moment, any of the states may return to the initial state as denoted in the state diagram. It depicts this ride-sharing approach in a great amount of detail.
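Here is a sketch of that state machine, with the transitions the following sections describe and the rule that any state may return to the initial state at any moment. The exact transition set is my reading of the description, not the patent’s diagram verbatim.

```python
from enum import Enum, auto

class RideState(Enum):
    INITIAL = auto()
    SIGN_IN = auto()
    CONFIRM_BOOK = auto()
    RESTORED = auto()         # entered when a ride ID shows a trip already in progress
    RIDE_IN_PROGRESS = auto()
    TRANSITION = auto()

# Allowed forward transitions; any state may also fall back to INITIAL.
TRANSITIONS = {
    RideState.INITIAL: {RideState.SIGN_IN},
    RideState.SIGN_IN: {RideState.CONFIRM_BOOK, RideState.RESTORED},
    RideState.CONFIRM_BOOK: {RideState.RIDE_IN_PROGRESS},
    RideState.RESTORED: {RideState.RIDE_IN_PROGRESS},
    RideState.RIDE_IN_PROGRESS: {RideState.TRANSITION},
    RideState.TRANSITION: set(),
}

def step(current: RideState, target: RideState) -> RideState:
    if target is RideState.INITIAL or target in TRANSITIONS[current]:
        return target
    raise ValueError(f"Illegal transition {current.name} -> {target.name}")

state = step(RideState.INITIAL, RideState.SIGN_IN)
state = step(state, RideState.CONFIRM_BOOK)
print(state.name)  # CONFIRM_BOOK
```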

The Initial Ride Sharing State in the Navigation and Transit Communication System

A user may open a mapping application and begin in the initial state. In the initial state, the mapping application presents a map display of a geographic area. It may receive geographic search queries, provide search results in response to the geographic search queries, and display navigation or travel directions from the user’s current location or some other specified starting location to a selected destination location.

The navigation or travel directions are provided for several different modes of transportation (e.g., walking, biking, driving, public transit, ride services, a recommended mode of transportation that may include multiple modes of transportation for arriving at the destination location based on the shortest duration, distance, or lowest cost, etc.).

When the user selects the ride services mode of transportation, or selects multi-modal travel directions that include a segment covered by a ride service and then chooses a ride service provider and type of ride service, the mapping application proceeds to the sign-in state.

A Sign-In State for This Navigation and Transit Communication System

In the sign-in state, the mapping application determines whether the user is signed in to a client account associated with a provider of the mapping application.

If the user is not signed in, the mapping application may provide user controls for entering user login information, such as a username and password, to sign in to the client account. When the user signs in, the mapping application signs the user in to a user profile associated with the third-party provider that provides the ride service.

The Confirm/Book State of the Navigation and Transit Communication Information System

The user may sign in to the third-party provider using the client account associated with the provider of the mapping application. When the user is signed in to the third-party provider, the mapping application invokes the ride service API to retrieve a ride ID associated with the user profile to determine whether a ride is currently in progress. If there is a ride currently in progress, the mapping application transitions to the restored state. If there is no ride ID, the mapping application proceeds to the confirm/book state.

In the confirm/book state, and more specifically the confirm state, the mapping application presents a pick-up request display that includes a user control for selecting a pick-up location, like the display described above. The pick-up request display may also include user controls for selecting or adding payment methods.

The mapping application may retrieve payment methods for the user stored with the ride service provider via the ride service API. The mapping application may display masked indications of each payment method for the user to choose from, along with an additional user control for entering a new payment method.

A Pick Up Request in the Navigation and Transit Communication System

When the user has selected a pick-up location and payment method, the mapping application may present a user control such as a “Confirm Pick-up” button, which transitions the mapping application to the booking state.

Booking in the Navigation and Transit Communication System

In the booking state, the mapping application requests ride services from the ride service provider from the pick-up location to the destination location via the ride service API. The ride service API then communicates with the ride service provider to select a driver for the ride. The ride service provider may broadcast a message to each driver within a threshold distance of the pick-up location and select the first driver to respond to the broadcasted message.

In any event, the ride service API may then provide a ride ID to the mapping application, and the mapping application proceeds to the ride-in-progress state. In the ride-in-progress state, the mapping application continuously or periodically (e.g., every 5-10 seconds) calls a get ride status function, providing the ride ID to the ride service API to retrieve the ride's status.

The ride service API may provide status information to the mapping application. The status information may include: waiting for the driver to accept the ride, waiting for the driver to arrive at the pick-up location, ride in progress, and ride completed.
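As a rough sketch, that polling could look like the loop below; the get_ride_status call and the status strings are hypothetical stand-ins for whatever the ride service API actually exposes.

```python
import time

def track_ride(ride_api, ride_id, poll_seconds=5):
    """Poll the ride service API until the ride completes; the patent
    suggests polling roughly every 5-10 seconds."""
    while True:
        status = ride_api.get_ride_status(ride_id)  # hypothetical API call
        print(f"ride {ride_id}: {status}")
        if status == "ride_completed":
            break
        time.sleep(poll_seconds)
```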

Waiting for the Driver to Arrive

During the waiting-for-driver-to-arrive and ride-in-progress states, the ride service API may also return the driver's current location for display via the mapping application. The mapping application may present an indicator of the driver on the map display, along with the pick-up location or destination location, so the user can view the driver's progress toward the pick-up location or along the route to the destination location.

Additionally, during the waiting-for-driver-acceptance, waiting-for-driver-arrival, and ride-in-progress states, the mapping application may present a user control for canceling the ride. When selected, it causes the mapping application to send a cancel request to the ride service provider via the ride service API.

This mapping application may also present a user control for modifying the destination location, which, when selected, may cause the mapping application to provide a change destination request to the ride service provider via the ride service API.

Dropping the User Off at a Destination

Once the user gets dropped off at the destination location, the mapping application proceeds to the completed state. In the completed state, the mapping application may present:

  • A summary of the ride including a final price of the ride
  • A user control for rating the driver
  • Any other suitable information about the ride

Then the mapping application may return to the initial state.

Returning to the Restored State

As mentioned above, the mapping application transitions to the restored state when the user signs in to the third-party provider and a ride is currently in progress. The user may have exited the mapping application and then reopened it while a ride was underway. In the restored state, the mapping application proceeds to the ride-in-progress state and calls the get ride status function (e.g., every 5-10 seconds) to retrieve the ride's status.

Besides providing ride services, the mapping application provides multi-modal transportation for navigating a user to her destination location. The user may select a recommended mode of transportation that combines multiple modes of transportation into an optimal route to the destination location based on the shortest duration, shortest distance, lowest cost, etc.

The user may provide preferences, such as "Avoid highways," "Use public transit," "Avoid walking directions at night," "Lowest cost," or "Shortest duration," and may also state:

  • A preferred mode of transportation
  • A preferred ride service provider
  • A preferred ride service type, such as a carpooling ride service
  • Any other suitable preferences

The mapping application may present one or several optimal routes to the destination location using one or several modes of transportation and according to the user’s preferences.

The mapping application provides a request for navigation directions using a recommended mode of transportation to the server device. The request can include a starting location, a destination location, and user data, including the user's preferences.

The server may retrieve map data, navigation data, traffic data, etc., to generate routes from start to destination.

The server device may invoke ride service APIs to retrieve ride service data for ride service providers. These could include estimated wait times and price estimates for particular route segments. An optimal route may include a ride service to and from a public transit stop.

The server device may generate a recommended multi-modal route that includes a first public transit stop one mile from the user's starting location and a second public transit stop one mile from the user's destination location. The recommended multi-modal route may include a ride service from the starting location to the first public transit stop and another ride service from the second public transit stop to the destination location. It may instead include walking directions from the starting location to the first public transit stop or from the second public transit stop to the destination location.
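One way to picture such a multi-modal route is as an ordered list of segments, each with its own mode, duration, and cost. The data shapes and numbers below are illustrative assumptions, not anything specified in the patent.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    mode: str        # "walk", "ride_service", "transit", etc.
    start: str
    end: str
    minutes: float
    cost: float

# A route like the one described above: a ride service to the first transit
# stop, a train ride, then a second ride service to the destination.
route = [
    Segment("ride_service", "home", "1st Ave Station", 6, 7.50),
    Segment("transit", "1st Ave Station", "Main St Station", 25, 2.75),
    Segment("ride_service", "Main St Station", "office", 5, 6.00),
]

total_minutes = sum(s.minutes for s in route)
total_cost = sum(s.cost for s in route)
print(f"{total_minutes:.0f} min, ${total_cost:.2f}")
```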

The server device may identify a ride service provider and ride service type that minimizes cost and wait time by communicating with the ride service providers. When the user indicates a preferred ride service provider or ride type, the server device may retrieve ride service data from the preferred ride service provider.

The server device can then include the preferred ride service provider in the route. It may generate one or several recommended multi-modal routes and provide them to the mapping application for the user to select and navigate to the destination location.

Other Aspects of the Navigation and Transit Communication System

This system may identify routes that include a particular ride service provider and ride service type.

Some ride service providers may include a shuttle ride service type. A route may include taking a train to a stop near a shuttle pick-up location and then taking the ride service from the shuttle pick-up location to a shuttle stop walking distance from the destination location. The user may save time and reduce costs when the shuttle pick-up location is timed with the train stop.

Each of the identified routes gets ranked or scored according to an optimization technique. The ranking or scoring may take into account several factors, such as distance, duration, cost, and user data, including user preferences.

Ranking Identified Routes to Cut Travel Time

The identified routes may get ranked to minimize travel time to the destination location. In another example, the identified routes may get ranked to minimize the price of travel to the destination location.

Each identified route may receive a distance score, a duration score, a cost score, a user preferences score, or any other suitable score. The scores may get weighted, aggregated, or combined in any suitable manner to generate a score for each route.

The routes may then get ranked according to their respective scores to minimize cost, time, and distance. Routes that do not meet the user preferences may get filtered out or receive a score of zero. The recommended routes and the ride service provider/type may be ranked and selected based on user data.
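The scoring described in these last few paragraphs is easy to picture in code. Below is a minimal sketch, assuming invented weights, field names, and a single hard preference filter; the patent does not specify any of these details.

```python
def violates_preferences(route, preferences):
    # Illustrative hard filter, e.g., "Avoid walking directions at night."
    return bool(preferences.get("avoid_night_walking")
                and route.get("walks_at_night"))

def score_route(route, weights, preferences):
    """Combine per-factor scores into one weighted score; routes that
    fail a user preference are zeroed out, as described above."""
    if violates_preferences(route, preferences):
        return 0.0
    factors = {
        "distance": 1.0 / (1.0 + route["miles"]),
        "duration": 1.0 / (1.0 + route["minutes"]),
        "cost": 1.0 / (1.0 + route["dollars"]),
    }
    return sum(weights[name] * value for name, value in factors.items())

weights = {"distance": 0.2, "duration": 0.5, "cost": 0.3}
prefs = {"avoid_night_walking": True}
routes = [
    {"miles": 8, "minutes": 30, "dollars": 12, "walks_at_night": False},
    {"miles": 6, "minutes": 45, "dollars": 5, "walks_at_night": True},
]
ranked = sorted(routes, key=lambda r: score_route(r, weights, prefs),
                reverse=True)
```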

If the user indicates he would not like to walk at night, any routes that include a walking segment after a threshold time may be filtered out or ranked at the bottom. The cost may depend on using a particular public transit system or ride service provider and ride service type.

The server device may invoke one or several ride-sharing APIs to determine price estimates for using a particular ride service provider and ride service type for a segment of a route.

Besides ranking the identified routes, the server device may rank candidate rides, where each candidate ride corresponds to a particular ride service provider and ride service type.

The candidate rides may become ranked or scored according to factors such as distance, duration, cost, and user data, including user preferences.

The candidate rides may get ranked to minimize the wait time for the driver to arrive at the pick-up location.

The candidate rides may get ranked to minimize the travel price to the destination location. The server device may rank the candidate rides according to wait time, price, or any other suitable category.

The candidate rides may also get ranked according to user feedback data for the ride service providers. The user feedback data may include data showing past ratings or reviews of the ride service providers by riders.

The server device provides routes or a listing of rides ranked above a threshold ranking, such as the top three highest-ranking routes, as recommended routes or rides for the user to choose from in the mapping application.
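As a small sketch of that selection step, with invented field names:

```python
def recommend_rides(candidates, top_n=3):
    """Rank candidate rides by wait time, then price, then rider feedback,
    and keep only those above the threshold (here, the top three)."""
    ranked = sorted(
        candidates,
        key=lambda c: (c["wait_minutes"], c["price"], -c["avg_rating"]),
    )
    return ranked[:top_n]
```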

Navigation and Transit Communication Conclusion

I cut off the end of the patent, which discusses how different transit types may combine. Nothing in the patent says that Google will offer any transit services; however, offering information about these services and different ways to combine them could become useful.

Ridesharing services such as Uber and Lyft have become disruptive ways of traveling. We see driverless cars from companies such as Waymo, which could enter that market. Many people ride buses, trains, subways, and taxis. It makes sense for Google to explore this navigation and transit space. They may have looked at a lot more than what exists in this patent.

Navigation and Transit Communication at Google is an original blog post first published on Go Fish Digital.

Relevance-Ordered Categories of Information in Search Results
Relevance Ordered Categories for Search Results on Mobile Devices?

As a reporter for the Daily Planet, Clark Kent most likely searches for news-related information. His Alter Ego and Super Hero, Superman, likely searches for comics-related material such as a kryptonite protection suit.

Imagine a search engine that provides relevance-ordered categories of information to searchers in search results that may use profiles for different searchers. Google has been exploring this concept for at least the last dozen years, having filed an early version of a patent application covering this topic in 2008.

Related Content:

A patent about relevance-ordered categories of information in search results was granted to Google in the past week. It describes how Google may provide search results on mobile devices in relevance-ordered categories.

The patent tells us about what the inventors believe are the expectations of searchers on mobile devices:

They expect to have access on the road, in coffee shops, at home, and in the office through mobile devices to information previously available only from a personal computer that was physically connected to an appropriately provisioned network. They want news, stock quotes, maps, and directions, and weather reports from their cell phones; email from their personal digital assistants (PDAs); up-to-date documents from their smartphones; and timely, accurate search results from all their mobile devices.

The phones of 2020 are very different from the mobile devices of 2008. To get an idea of how Google viewed mobile devices back then, I would recommend this whitepaper from the Google Mobile research team: Computers and iPhones and Mobile Phones, oh my!, A logs-based comparison of search users on different devices, by Maryam Kamvar, Melanie Kellar, Rajan Patel, and Ya Xu.

We don't know if the process described in this just-granted patent is one that Google will pursue, but it has value in how it describes the problems with search, as perceived when the patent was first filed, that Google attempted to overcome. Keep in mind that Google has been working on entity-based indexing using the knowledge graph, and on mobile-first indexing to make websites better experiences for searchers on mobile devices, in the years since 2008.

The newly granted patent tells us that there may be some problems with providing the same rich range of results to people on mobile devices as to people using desktop computers. These problems include:

  1. Input capabilities may be more limited in a mobile device than in a desktop computer, requiring more effort by a searcher to enter a search query (or other information)
  2. Displays in mobile devices tend to be smaller than displays in desktop computers. It may not be possible to display as much information at any given time on a mobile device
  3. Data connections between mobile devices and the Internet may be slower than between desktop computers and the Internet

The solution described in the patent to address those problems involves providing relevance-ordered categories of information to a searcher.

We are told that one or more categories of information (e.g., web information, image information, news information, navigational information, etc.) may be provided to a searcher in response to a query. This drawing from the patent gives us a sense of how Google might provide results using relevance-ordered categories of information in search results:

[Patent drawing: relevance-ordered information categories]

Bringing Relevance-Ordering to Categorized Search Results

The first category of information shown may be selected based on a prediction of the category of information the user is likely seeking.

That prediction may be made using rules from a machine learning system, trained using historical search data.

The prediction may be made based on a statistical correspondence between the received query and a specific category of information, developed through analysis of other similar queries.

The prediction may also be based on a searcher profile associated with the query (Clark Kent or Superman). Finally, the prediction could be based on information aggregated across multiple searchers, with or without data about a particular user.

The prediction may also be based on a combination of factors, including, for example, the statistical correspondence between the received query and a specific category of information and a user profile associated with the query.

The process behind the patent might work as follows (a sketch of this flow in code appears after the list):

  1. Receiving from a mobile device a query
  2. Generating a number of different category-directed result sets for the query
  3. Determining an order for the number of category-directed result sets based on the search
  4. Transmitting the number of category-directed result sets to the mobile device in a specific order
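Read as server-side pseudocode, those four steps might look something like the Python below. The search_in_category helper is a placeholder, and category_likelihood is sketched a little further down; neither name comes from the patent.

```python
CATEGORIES = ["web", "images", "news", "local", "shopping"]

def search_in_category(query, category):
    # Placeholder: a real system would query a category-specific index.
    return [f"{category} result for {query!r}"]

def handle_mobile_query(query, device_profile):
    # Step 1: the query arrives from the mobile device (the arguments above).
    # Step 2: generate a category-directed result set for each category.
    result_sets = {c: search_in_category(query, c) for c in CATEGORIES}
    # Step 3: order the categories by predicted relevance for this user/query.
    ordered = sorted(
        CATEGORIES,
        key=lambda c: category_likelihood(query, c, device_profile),
        reverse=True,
    )
    # Step 4: transmit the result sets in that order for display.
    return [(c, result_sets[c]) for c in ordered]
```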

It could also involve formatting the category-directed result sets for display in a tabbed array, ordered by decreasing correlation between each category-directed result set and the query.

Determining that order may involve calculating, for each of the category-directed result sets, a likelihood value representing how likely that particular category of results is to be responsive to the received query; a sketch of one way to combine these signals appears after the list below.

Calculating the likelihood value can include:

  1. Retrieving a profile associated with the mobile device, including information about the distribution of previously determined correlations between other queries received from that device and one or more different categories of information.
  2. Retrieving data associated with queries received from other mobile devices that are substantially similar to the received search query, including distribution of previously determined correlations between the other substantially similar search queries and one or more different categories of information
  3. The distribution can include multiple sub-distributions, each sub-distribution being related to any one or more of (a) classification of a device from which the query was received, (b) a model or model group of a device from which the query was received, (c) a geographic area from which the query was received, and (d) approximate time of day at which the query was received
  4. Retrieving a profile associated with the remote device and performing a first calculation to obtain a first result based on a portion of the retrieved profile; retrieving data associated with the query and performing a second calculation to obtain a second result based on a portion of the retrieved data; and performing a third calculation based on a weighted contribution of the first result and the second result
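Item 4 above, the weighted contribution of a profile-based result and a query-history-based result, is the easiest piece to sketch. The distribution lookups below are toy placeholders; this is one plausible reading of the claim, not Google's actual formula.

```python
def profile_category_distribution(profile):
    # Placeholder: historical share of each category for this user/device.
    return profile.get("category_history", {})

def similar_query_distribution(query):
    # Placeholder: aggregate category share for substantially similar queries.
    return {"news": 0.5, "web": 0.3, "images": 0.2}

def category_likelihood(query, category, profile, profile_weight=0.4):
    """Blend the user's own historical category distribution with the
    aggregate distribution for substantially similar queries."""
    p_user = profile_category_distribution(profile).get(category, 0.0)
    p_query = similar_query_distribution(query).get(category, 0.0)
    return profile_weight * p_user + (1.0 - profile_weight) * p_query
```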

The search query can be received from a mobile device.

The different categories can include three or more categories selected from the group consisting of:

  • Location-based results
  • Web results
  • Images
  • Video
  • Shopping
  • Blogs
  • Maps
  • Books

In some aspects, the results ranker is configured to order categories based on

a) a profile associated with the remote device and
b) relevance data correlating other search queries received from other remote devices and one or more categories of information, the other search queries are substantially similar to the search request.

This relevance-ordered search results patent can be found at:

Providing relevance-ordered categories of information
Inventors: Yael Shacham, Leland Rechis, Scott Jenson, and Gabriel Wolosin
Assignee: Google LLC
US Patent: 10,783,177
Granted: September 22, 2020
Filed: June 20, 2011
Abstract

A computer-implemented method is disclosed. The method includes receiving from a remote device a search query, generating a plurality of different category-directed result sets for the search query, determining an order for the plurality of category-directed result sets based on the search query, and transmitting the plurality of category-directed result sets to the remote device, in a manner that the result sets are to be displayed in the remote device in the determined order.

Take Aways From this Relevance-Ordered Search Results Patent

When I came across this patent, I was excited to share it because it paints a picture of what Google could have given us. It reminds me of a lot of the Universal search that we used to have on desktop computers that showed us a mix of local results, news results, image results, and video results. I wrote about the many updates to Universal Search in the post Google’s New Universal Search Results.

Google has provided more and more knowledge-based results, even in mobile search results, with knowledge cards, featured snippets, PAA questions, and entity carousels. The 10 blue links of the early 2000s are very different from the search results of today. So the problems this patent was intended to address on mobile devices may no longer be quite the problems they were back in 2008. Because of initiatives such as mobile-first indexing and entity-based search, we may not see the benefit from relevance-ordered information in search results that the inventors behind this patent may have thought we would. But there is value in considering other approaches that Google could have taken.

Ideally, your website should be easily indexable by both Googlebot Desktop and Googlebot Smartphone, and it should display well on mobile devices in a mobile-friendly manner. On top of that, if it is responsive to your audiences' informational and situational needs, it will stand a good chance of ranking for the searches they perform on their mobile devices.

I do question whether Google will adopt its relevance-ordered categories approach to search results. They could, but they may not at this point. If Google comes up with a newer version of this patent that describes how knowledge-based results might fit into search results, then maybe we will see this used. I will keep an eye out for that.

Relevance-Ordered Categories of Information in Search Results is an original blog post first published on Go Fish Digital.

Disambiguating Search Input Based On Context of Input
“Hey Google; New York, New York!”

Google hears a query for “New York, New York.” Does it give directions, play a Frank Sinatra Song, or show tourist style search results? Likely that depends upon the context of that query.

As we are told in a Google patent:

User input can be identified as ambiguous for a variety of reasons. Generally, user input is identified as being ambiguous if the system interprets it as having more than one likely intended meaning, in the absence of attempts to disambiguate the input using the techniques described here. For instance, in the present example, the user input is identified as being ambiguous based on each of the commands possibly corresponding to the input–the user input “Go To New York, New York” can indicate a geographic location (the city of New York, N.Y.), a song (the song “New York, New York”), and a web page (a tourism web page for the city of New York, N.Y.). The commands can be identified as possibly corresponding to the input using any of a variety of techniques, such as polling an application and/or service corresponding to each command (e.g., querying a music player associated with the command “Go To [Song]” to determine whether “New York, New York” is an accessible song on the mobile computing device), accessing one or more groups of permissible terms for each command (e.g., accessing a group of permissible geographic location terms for the command “Go To [Geographic Location]”), etc.

Disambiguating Search Input based on the context of those Queries

Google has been working to interpret search inputs in ways that let it provide unambiguous answers to search queries. This recently granted Google patent looks at the context of queries to try to disambiguate user inputs and make results unambiguous.

Related Content:

As the patent tells us, this is its purpose:

In the techniques described in this document, the context of a computing device, such as a mobile telephone (e.g., smartphone, or app phone) is taken into consideration to disambiguate ambiguous user inputs. Ambiguous user input is the input that, in the absence of relevant disambiguating information, would be interpreted by the computing device or for the computing device (e.g., by a server system with which the computing device is in electronic communication) as corresponding to more than one query or command. The ambiguous input may be particularly common for spoken input, in part because of the presence of homophones, and in part, because a speech-to-text processor may have difficulty differentiating words that are pronounced differently but sound similar to each other. For example, if a user says “search for sail/sale info” to a mobile computing device, this voice input can be ambiguous as it may correspond to the command “search for sail info” (e.g., information regarding a sail for a sailboat) or to the command “search for sale info” (information regarding a sale of goods). A device might even determine that the input was “search for sell info,” because “sell” and “sale” sound alike, particularly in certain dialects.

How might this search input disambiguation work?

The patent tells us that ambiguous user input may be disambiguated based on a context associated with a mobile computing device (and/or a user of the mobile computing device) separate from the user input itself, such as:

  1. The physical location where the mobile computing device is located (e.g., home, work, car, etc.)
  2. Motion of the mobile computing device (e.g., accelerating, stationary, etc.)
  3. Recent activity on the mobile computing device (e.g., social network activity, emails sent/received, telephone calls made/received, etc.)

Examples of search input being disambiguated based on context can include

1. A device that is docked may determine the type of dock it is in, such as via physical electrical contacts on the dock and device that match each other, or via electronic communication (e.g., via Bluetooth or RFID) between the dock and the device. That could tell it whether its context is "in-car" or "at home." Because of that,

…the device may then disambiguate spoken input such as "directions," where the term could be interpreted as geographic directions (e.g., driving directions) in an "in-car" context, and how-to directions (e.g., for cooking) in an "at home" mode.

2. In another example, when a mobile computing device receives ambiguous user input that may indicate multiple commands, it may determine a current context associated with the device, including where the device is currently located. That context can then influence the results provided.
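A toy sketch of that idea, with made-up context labels and command names standing in for whatever Google's system actually uses:

```python
# Candidate interpretations for the spoken input "directions".
CANDIDATES = {
    "in_car": "geographic_directions",   # e.g., driving directions
    "at_home": "how_to_directions",      # e.g., a cooking recipe
}

def disambiguate(utterance, context):
    """Pick a command for ambiguous input using device context; fall back
    to a plain web search when the context does not settle it."""
    if utterance == "directions":
        return CANDIDATES.get(context, "web_search")
    return "web_search"

print(disambiguate("directions", "in_car"))   # geographic_directions
print(disambiguate("directions", "at_home"))  # how_to_directions
```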

Advantage of Disambiguating Search Input Based Upon Context

The patent tells us of the advantage of following the processes described in the patent as being:

Permitting users to instruct a mobile computing device to perform the desired task without requiring the user to comply with all of the formalities of providing input for the desired task. As features provided by a mobile computing device have increased, users may be required to provide their input with greater specificity so that the input is properly associated with the intended feature. However, such specificity can be cumbersome and difficult to remember. The described methods, systems, techniques, and mechanisms described in this document can allow a user to provide input using less specificity than formally required for a feature yet still access the intended feature.

The patent is:

Disambiguating input based on context
Inventors: John Nicholas Jitkoff and Michael J. LeBeau
Assignee: Google LLC
US Patent: 9,966,071
Granted: May 8, 2018
Filed: July 1, 2016

Abstract

In one implementation, a computer-implemented method includes receiving, at a mobile computing device, ambiguous user input that indicates more than one of a plurality of commands; and determining a current context associated with the mobile computing device that indicates where the mobile computing device is currently located. The method can further include disambiguating the ambiguous user input by selecting a command from the plurality of commands based on the current context associated with the mobile computing device, and causing output associated with the performance of the selected command to be provided by the mobile computing device.

I had a conversation with a Google speaker (device) this morning that started with a "Hey Google," but it didn't require me to say that hot-word phrase again, after some changes Google announced at the recent Google I/O conference. I asked for sports scores, and then asked follow-up questions about them. I'm still learning how best to interact with my speaker version of Google Now, but it is interesting. (Will saying please when we ask for something be helpful?) My morning conversation came to mind as I started reading this passage from this patent:

This document describes techniques, methods, systems, and mechanisms for disambiguating ambiguous user input on a mobile computing device (e.g., mobile feature telephone, smart telephone (e.g., iPhone, BLACKBERRY), personal digital assistant (PDA), portable media player (e.g., iPod), etc.). As the features provided by mobile computing devices have increased, the number of commands recognized by a mobile computing device can increase as well. For example, each feature on a mobile computing device may register one or more corresponding commands that a user can type, speak, gesture, etc. to cause the feature to be launched on the mobile computing device. However, as the number of recognized commands increases, commands can converge and make it more difficult to distinguish to which of multiple commands user input is intended to correspond. The problem is magnified for voice input. For example, voice input that is provided with loud background noise can be difficult to accurately interpret and, as a result, can map to more than one command recognized by the mobile computing device. For instance, voice input “example” could be interpreted as, among other things, “egg sample,” “example,” or “exam pull.” As another example, the command “go-to” may represent “go to [geographic location]” for a mapping application, and “go to [artist/album/song]” for a media player.

As we are trying to learn how best to interact with our devices and speakers and mobile devices to get the best results from Google, Google is also trying to learn how best to interact with us, and to make sure we are understood when we ask for something. This patent on disambiguating search input takes a few steps in that direction. As it tells us:

Using the techniques described here, in response to receiving ambiguous user input, a current context for the mobile device (and/or a user of the mobile computing device) can be determined and used to disambiguate the ambiguous user input. A current context for a mobile computing device can include a variety of information associated with the mobile computing device and/or a user of the mobile computing device. The context may be external to the device and represent a real-time status around the device, such as a current physical location (e.g., home, work, car, located near wireless network “testnet2010,” etc.), a direction and rate of speed at which the device is traveling (e.g., northbound at 20 miles per hour), a current geographic location (e.g., on the corner of 10th Street and Marquette Avenue), and ambient noise (e.g., low-pitch hum, music, etc.). The context may also be internal to the device, such as upcoming and/or recent calendar appointments (e.g., meeting with John at 2:30 pm on Jul. 29, 2010), a time and date on a clock in the device (e.g., 2:00 pm on Jul. 29, 2010), recent device activity (e.g., emails sent to John regarding the 2:30 meeting), and images from the mobile computing devices camera(s).

I often use my phone to navigate to places, and I would like to be able to speak to it to change where I am navigating. For example, if I decide to drive past my original destination to go to a different store first, I would like to be able to turn off the navigation so that it stops telling me to take a U-turn back to that first destination.

This patent is worth spending time going over because it does present some interesting ideas about what might influence how devices might work based on context, as it tells us here:

With the ambiguous user input identified, at step B a current context for the mobile device can be determined. The current context includes information that describes the present state and/or surroundings of the mobile computing device and/or the user of the mobile computing device at the time the input is received. For instance, the current context can include a variety of information related to the mobile computing device and the user, such as information regarding the surrounding physical environment (e.g., available networks, connections to other nearby computing devices, geographic location, weather conditions, nearby businesses, the volume of ambient noise, level of ambient light, the image captured by the mobile device’s camera, etc.), the present state of the mobile computing device (e.g., rate of speed, touchscreen input activated, audio input activated, ringer on/off, etc.), time and date information (e.g., time of day, date, calendar appointments, day of the week, etc.), user activity (e.g., recent user activity, habitual user activity), etc. The current context can be determined by the mobile computing device using data and sensors that are local and/or remote to the mobile computing device.

Change involving Disambiguating Search Input depending upon Context

Once upon a time, when you optimized a page for a query, it was likely a query performed by someone sitting at a desk using a desktop computer or a laptop computer. Now it might be someone in a car or on a bus or train, or in the aisles of a store or at a coffee house. When they search for “New York, New York” it may be because they want traffic directions, or listen to a song, or to read a web page to find out what is happening downtown.

I remember visiting my sister when she went to school in Manhattan, and she suggested that we find out whether any street festivals were going on in the city that day. She picked up the phone, dialed 411, and asked an operator. This was about 5 years before there was a World Wide Web to use to find out, and she did get answers from the operators, which surprised me tremendously. I didn't expect those answers from that source. Today I would expect to find a Web page that could tell me about those festivals, but back then I wouldn't have expected that we would someday be able to find information like that using a computer or mobile phone. The world is changing.

How prepared are you for changes that mobile devices and search engines will be bringing us?

Disambiguating Search Input Based On Context of Input is an original blog post first published on Go Fish Digital.

Expect to See More Mobile Geofences
How is a Mobile Geofence used?

Imagine you drive to a location, and your phone starts receiving notifications about an event happening nearby. It could be a discount at a coffee house. It could be free tours at a local museum. Maybe a beach party with a free lobster and crab fest. It could be a local event and offer directions on how to get there. Or maybe, it’s a note from a friend who is having a holiday party, sending along more details.

Related Content:

Location-Based Patents

We’ve seen patents about location-based notifications and advertisements before. With the growth of the use of mobile devices, such as phones and cameras to connect to Apps and the Web, it possibly shouldn’t be a surprise to see more ideas and innovations arise from sources such as Google and Apple. I wrote about an acquisition Google made back in 2011 which took advantage of mobile devices in the post, Google Acquires Virtual Post-it Notes Patents. Will we start seeing location-based alerts and notifications from Google as described in the patents they acquired?

We might begin to see ideas start at one company on the Web and then spread out to other companies, involving such things as geofences.

A mobile geofence is a virtual perimeter you can create around a location to send advertisements or notifications to people who enter and stay in that location.

You can also choose a perimeter around a particular building, neighborhood, or event. It’s an idea that is growing these days because of mobile devices connecting to the Web. Apple was granted a patent on setting up geofences this past week. Additionally, Google was granted a patent on setting up geofences in October. You aren’t limited to surrounding your location with a geofence and can choose other locations around competitors, or events. There is a nice introduction to the topic at: 7 Things About Geofencing You’ll Kick Yourself for Not Knowing
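At bottom, a circular geofence is just a distance check against a perimeter. Here is a minimal sketch, assuming a circular fence and made-up coordinates; real implementations (such as the platform APIs linked at the end of this post) handle this for you.

```python
import math

def haversine_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(device, center, radius_meters):
    """True when the device is within the circular perimeter."""
    return haversine_meters(device[0], device[1],
                            center[0], center[1]) <= radius_meters

# A 200 m geofence around a coffee house (coordinates are invented).
print(inside_geofence((38.9047, -77.0164), (38.9052, -77.0170), 200))  # True
```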

It’s worth looking at the Apple patent and the Google Patent, and knowing that this may be something that both sources may offer in the future. Furthermore, Snapchat has been offering Geofilters, to be used in a geofenced area for a while, and you can find out more of those on this page from them: On Demand Geofilters: Submission Guidelines.

Google’s Geofence Patent

Clustering geofence-based alerts for mobile devices
Inventors: Xiaohang Wang, Farhan Shamsi, Yakov Okshtein, David Singleton, Debra Lin Repenning, Lixin Zhang, and Marcus Alexander Foster
Assignee: GOOGLE
US Patent: 9,788,159
Granted: October 10, 2017
Filed: January 31, 2017

Abstract:

A geofence management system obtains location data for points of interest. The geofence management system determines, at the option of the user, the location of a user mobile computing device relative to specific points of interest and alerts the user when the user nears the points of interest. The geofence management system, however, determines relationships among the identified points of interest, and associates or “clusters” the points of interest together based on the determined relationships. Rather than establishing separate geofences for multiple points of interest, and then alerting the user each time the user’s mobile device enters each geofence boundary, the geofence management system establishes a single geofence boundary for the associated points of interest. When the user’s mobile device enters the clustered geofence boundary, the geofence management system notifies the user device to alert the user of the entrance event. The user then receives the clustered, geofence-based alert.
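A toy illustration of the clustering idea in that abstract, reusing the haversine_meters helper from the sketch above; the greedy grouping rule here is my own stand-in for whatever relationship analysis Google actually performs.

```python
def cluster_points_of_interest(pois, max_gap_meters=150):
    """Greedy single-pass clustering: a point of interest joins the first
    cluster whose members are all within max_gap_meters; otherwise it
    starts a new cluster. One geofence (and one alert) can then cover
    each cluster instead of one per point of interest."""
    clusters = []
    for poi in pois:
        for cluster in clusters:
            if all(haversine_meters(*poi, *member) <= max_gap_meters
                   for member in cluster):
                cluster.append(poi)
                break
        else:
            clusters.append([poi])
    return clusters
```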

Apple’s Geofence Patent

Content geofencing
Inventors: Thomas Alsina, David T. Wilson, Kenley Sun, Sagar Joshi
Assignee: APPLE INC.
US Patent: 9,826,354
Granted: November 21, 2017
Filed: May 11, 2016

Abstract

Systems, methods, and computer-readable storage media for invitational content geofencing. A system first sends, to a server location data associated with the system, the location data is calculated at the system. The system then receives a listing of places of interest within a geofence including a geographical perimeter for identifying places of interest in the listing, the geofence is based on the location data associated with the system. Next, the system selects a place of interest from the listing based on the location of the system. The system then presents a content item associated with the place of interest.

Mobile Geofences in Action

It’s interesting seeing both Google and Apple mapping out the world virtually, especially like this. Would you consider setting up a geofence to advertise something in a particular location? It’s likely something that we will be seeing more of, as Google and Apple are making the technology available:

Here’s what I’m seeing so far:

Google: Provide contextual experiences when users enter or leave an area of interest
Apple: Location and Maps Programming Guide: Region Monitoring and iBeacon

Expect to See More Mobile Geofences is an original blog post first published on Go Fish Digital.

Google Running Vending Machines on the Cloud?
Google Cloud-based Vending

Jennifer Slegg, from the SEM Post, pointed me to this page about Google vending machines: Get a free travel item. I found it interesting, as I didn't know that Google was running vending machines in airports and giving away free stuff at them. It looks something like this:

[Embedded video: Google's Project Fi vending machine]

But, I’m questioning whether those vending machines are the impetus behind a new patent from Google, which appears to be aimed at providing more cloud-based services from the search engine.

Related Content:

Shifting Business Models at Google?

For all Google does, and all the different lines of business they have, they still make the vast majority of their money from showing advertisements that accompany search results. They're continually working to diversify themselves: self-driving cars, business services, hardware, etc.

One of their new approaches involves providing cloud-based services to customers. We see Google working to make their services on the cloud more attractive to potential clients, including technology optimized for machine learning, as described in a Google Research Blog post from a couple of weeks ago titled Introducing the TensorFlow Research Cloud. To better understand the competition in providing cloud-based services, there are a lot of articles that describe it, such as Who wins the three-way cloud battle? Google vs. Azure vs. AWS.

A patent was granted to Google today at the United States Patent and Trademark Office (USPTO), on a way to run Vending Machines in the Cloud.

I'm not sure how big an opportunity patenting cloud-based vending machines might be. I did find an article on the Coca-Cola site from February of 2015, called 16 Things You Didn't Know About Vending Machines in Japan and Around the World, which tells us that there were more than 6.9 million vending machines in the US at the end of 2010, and more than 3.8 million as of the publication of that article. Those are filled with more than just bottles of Coca-Cola. I'm not sure if this cloud-based approach is something that will catch on.

The description for the patent introduces it like this:

According to one general aspect, a computer-implemented method can include receiving, at a computing device, a beacon signal including a vending device identifier and sending, to a cloud-based vending service, the vending device identifier. The method can also include receiving, from the cloud-based vending service, an indication of at least one product available for purchase from the vending device and receiving, at the computing device, an indication of a selected product of the at least one product available for purchase. The method can further include sending, to the cloud-based service, a request to purchase the selected product and receiving, from the cloud-based vending service, a purchase token for the selected product. The method can still further include sending, to the vending device, the purchase token and receiving, from the vending device, an acknowledgment that the purchase token has been used to purchase the selected product.

The technologies that appear to be involved here are things such as Bluetooth beacons and web pages on which purchases can be made (a very different setup from the Project Fi vending machines in the video I embedded above).
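Pieced together from the description above, the purchase flow might look like this sketch, where every object and method name is a hypothetical stand-in rather than anything from Google's actual system:

```python
def buy_from_vending_machine(phone, cloud_service, vending_device):
    """Walk through the purchase flow as the patent describes it."""
    # 1. The machine's beacon broadcasts its vending device identifier.
    device_id = phone.receive_beacon(vending_device)
    # 2. The phone asks the cloud-based vending service what is for sale.
    products = cloud_service.list_products(device_id)
    selected = phone.choose(products)
    # 3. The cloud service issues a purchase token for the selected product.
    token = cloud_service.purchase(device_id, selected)
    # 4. The phone hands the token to the machine, which vends the product
    #    and acknowledges that the token has been used.
    return vending_device.redeem(token)
```

Note how the vending machine itself never needs its own Internet connection in this flow; the phone carries the token between the cloud service and the machine, which is the advantage the patent highlights below.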

I do remember, a couple of winters ago, stopping at a rest stop in Maryland along Route 95 a little north of the beltway around Washington, DC, and seeing several beverage vending machines that accepted Android Pay. At the time, I ended up pretty disappointed: I tried to use it, but it turned out it wasn't yet functioning. Since I didn't have any change or currency on me, Android Pay would have been the only way I could have purchased something to drink. Fortunately, they did have a water fountain.

The patent granted to Google is:

Cloud-based vending
Inventors: Roy Want, Scott Arthur Jenson, William Noah Schilit
Assignee: GOOGLE INC
US Patent: 9,666,013
Granted: May 30, 2017
Filed: September 29, 2015

Abstract

In a general aspect, a computer-implemented method can include receiving, at a computing device, a beacon signal including a vending device identifier and sending, to a cloud-based vending service, the vending device identifier. The method can further include receiving, from the cloud-based vending service, an indication of at least one product available for purchase from the vending device and receiving, at the computing device, an indication of a selected product of the at least one product available for purchase. The method can also include sending, to the cloud-based service, a request to purchase the selected product and receiving, from the cloud-based vending service, a purchase token for the selected product. The method can still further include sending, to the vending device, the purchase token and receiving, from the vending device, an acknowledgment that the purchase token has been used to purchase the selected product.

The patent does tell us that one of the advantages of providing cloud-based vending is:

Such approaches allow a consumer to make a cloud-based vending purchase from such a vending machine without the vending machine having a dedicated Internet or data network connection. Such approaches can be financially advantageous for an operator (owner) of the vending machine, as providing a dedicated Internet (or data network) connection in the vending machine can be cost prohibitive (e.g., due to profit margins of vending machine sales).

It sounds like this would make it possible to offer vending machines in more places, without having to provide the infrastructure necessary to connect a machine to a network. I could see this being convenient.

Added 2017/09/13 – this was interesting to see: Two Ex-Googlers Want To Make Bodegas And Mom-And-Pop Corner Stores Obsolete, which is about a new type of vending machine that is run in conjunction with an app. These vending machines look like they would work well on a cloud-based network. h/t to @dohertyjf

Google Running Vending Machines on the Cloud? is an original blog post first published on Go Fish Digital.

Google to Use Environmental Information in Queries
Environmental Information in Queries at Google

At the beginning of 2015, I wrote about a patent that told us it would influence search results by media that you may have listened to before (radio, television, movies, and so on). That post was about Google Media Consumption History Patent Filed. I was reminded of that media consumption history patent by one about “environmental information” that was just recently granted at the United States Patent and Trademark Office (USPTO) yesterday. I was also reminded of a patent that described Google Glass being able to recognize songs, which I wrote about in Google Glass to Perform Song Recognition, and Play ‘Name that Tune’?

Related Content:

Imagine watching a movie on TV, and asking your phone, “who is the actor starring in this movie?”

The patent tells us that the process it follows might involve using "environmental information, such as ambient noise," to help answer a natural language query. The patent puts this fairly straightforwardly:

For example, a user may ask a question about a television program that they are viewing, such as “What actor is in this movie?” The user’s mobile device detects the user’s utterance and environmental data, which may include the soundtrack audio of the television program. The mobile computing device encodes the utterance and the environmental data as waveform data, and provides the waveform data to a server-based computing environment.

I am going to try this out using environmental information to see if it can give me an answer.

The patent describes the actual process behind its operation in more technical detail:

The computing environment separates the utterance from the environmental data of the waveform data and then obtains a transcription of the utterance. The computing environment further identifies entity data relating to the environmental data and the utterance, such as by identifying the name of the movie. From the transcription and the entity data, the computing environment can then identify one or more results, for example, results in response to the user’s question. Specifically, one or more results can include an answer to the user’s question of “What actor is in this movie?” (e.g., the name of the actor). The computing environment can provide such results to the user of the mobile computing device.
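As a sketch, that pipeline might be wired together like the Python below, with every component a placeholder for the systems the patent describes:

```python
def answer_with_environment(waveform, audio_tools, nlq_engine):
    """Pipeline from the patent's description; audio_tools and nlq_engine
    are hypothetical components standing in for Google's systems."""
    # Separate the user's speech from the ambient soundtrack audio.
    utterance_audio, ambient_audio = audio_tools.separate(waveform)
    # Transcribe the spoken question, e.g., "What actor is in this movie?"
    question = audio_tools.transcribe(utterance_audio)
    # Identify an entity (e.g., the movie) from the environmental audio.
    entity = audio_tools.identify_entity(ambient_audio)
    # Query the natural language engine with the transcription plus entity.
    return nlq_engine.query(text=question, entity=entity)
```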

I will be surprised if this query works, but it also reminds me of a patent that tells me I could stand in front of a local landmark and relying upon location-based information, Google might be able to identify that landmark, as I described in the post: How Google May Interpret Queries Based on Locations and Entities (Tested)

I turned up the audio on my TV and I asked, “Who is the Actor in the movie I am watching?”

Answer:

[Screenshot: Google tries to guess]

The actor I was asking about was Tom Hanks, and the movie was “Captain Philips.” Google didn’t appear to be able to identify the movie I was watching from the audio soundtrack quite yet.

The environmental information patent is:

Answering questions using environmental context
Inventors: Matthew Sharifi and Gheorghe Postelnicu
Assignee: Google Inc. (Mountain View, CA)
United States Patent 9,576,576
Granted: February 21, 2017
Filed: August 1, 2016

Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data encoding an utterance and environmental data, obtaining a transcription of the utterance, identifying an entity using the environmental data, submitting a query to a natural language query processing engine, wherein the query includes at least a portion of the transcription and data that identifies the entity, and obtaining one or more results of the query.

Take Aways

The process described in this patent doesn't seem to be working at this point, but it could be sometime soon. Likewise, the process from the patent describing how location-based information could identify a landmark doesn't seem to be working yet either, but both patents take advantage of the sensors built into phones and make that information part of a query. I find myself thinking of Star Trek's Tricorder when I see features like media and location being built into queries. I suspect that we will see more useful features from phones as the Internet of Things comes into being, and we will be able to communicate with many devices in the future.

Google to Use Environmental Information in Queries is an original blog post first published on Go Fish Digital.

How to Get Your Business Started on Snapchat in 4 Steps
This video gives 4 steps that your business can follow as it gets started on Snapchat.

Transcript of Snapchat for Business

Hi there! My name is Daniel Russell and welcome to another tech talk. Today we’ll be talking about how you can get your business started on Snapchat.

Snapchat is the rising king of social media. True, Facebook may still be bigger. But if you want to get your brand in front of younger people, ages 18 to 24 and even 24 to 30, Snapchat is a great place to be for your business. But Snapchat is not like other social media platforms and sometimes it can be difficult knowing where to start. These four steps will help you get your business up and running on Snapchat and headed in the right direction.

Set up a public account

First and foremost, set up an account and make it public. Making your account public means that people can follow you and see the things that you post even if you haven't followed them back. You can make your account public by going to your Snapchat account's settings and scrolling down to the "Who Can…" section. In the "Who Can…" section, select who can "View My Story" and change the permissions from "My Friends" to "Everyone". This is important for a couple of reasons: it boosts your reach, and if people come across your Snapchat account and add you, they can see your content without you needing to add them back.

Promote your account on other social platforms

Number two: use any social media platforms you have already set up for your business to promote your new Snapchat account. If you already have a lot of Twitter followers or a lot of likes on your Facebook page, you can use those platforms to get your new Snapchat account in front of your current audience. Besides posting your Snapchat username or Snapchat QR code on those platforms, you can also announce contests that are for Snapchat followers only to help drive people to the new platform.

Create a tracking system

Number three, create a tracking system that lets you measure how effective your Snapchat marketing has been. For better or for worse, Snapchat does not have great analytics: there is currently no way to track the number of views your account receives over time, and no real way to track how many people have added you, either. So we recommend setting up a spreadsheet, or something similar, where you can enter this data and track it over time (a small script along those lines is sketched below).

We recommend tracking at least three things in that system. First, the number of views your story and content receive. Second, the number of story completions: stories are made up of multiple snaps, and sometimes people will watch your first snap but not your fifth or sixth. By tracking how many people view each snap in your story, you can get an idea of how effective your story is as a whole. Third, the number of screenshots people take of your content. Because Snapchat doesn't really have a built-in sharing feature, if people are screenshotting your content, that probably means it's really good stuff.

By tracking these three things over time, you can optimize your Snapchat content for the time of day, the day of the week, and even the number and length of the snaps inside your stories.
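
Since Snapchat exposes none of these numbers through an API, the tracking system really can be as simple as a log you fill in by hand. Here is one illustrative way to do it as a small script; the file name and field names are our own invention, not anything Snapchat provides.

```python
# An illustrative tracking log for the three Snapchat metrics discussed
# above: story views, story completions, and screenshots. The schema is
# invented for this example; the numbers must be read manually from the
# app and entered by hand, since Snapchat offers no analytics API.

import csv
from datetime import date
from pathlib import Path

LOG = Path("snapchat_tracking.csv")
FIELDS = ["date", "snaps_in_story", "views_first_snap",
          "views_last_snap", "completion_rate", "screenshots"]

def log_story(snaps_in_story, views_first, views_last, screenshots):
    """Append one day's manually gathered story metrics to the log."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "snaps_in_story": snaps_in_story,
            "views_first_snap": views_first,
            "views_last_snap": views_last,
            # Completion rate: share of first-snap viewers who reached the end.
            "completion_rate": round(views_last / views_first, 3),
            "screenshots": screenshots,
        })

# Example entry: a 6-snap story where 240 people watched snap one,
# 150 made it to snap six, and 12 took screenshots.
log_story(snaps_in_story=6, views_first=240, views_last=150, screenshots=12)
```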

Play the long game

Finally, number four: play the long game. Keep in mind that Snapchat is not a quick-converting platform; joining it won't instantly boost your sales. Snapchat content is about creating brand awareness and brand loyalty over the long haul. By staying away from over-promotion, keeping true to your brand, and letting people get to know your company on a personal level (going behind the scenes, showing them your employees and even current customers), you can provide subtle reminders about your brand so that when people are ready to start looking for something to buy, they'll come to you first.

Now, there's a lot more to Snapchat than just these four steps, and there are certainly other things you'll want to look into, including advertising on Snapchat and whether or not that's the right track for your brand. But by following these four steps when you set up your Snapchat account, you'll be in a great position to build brand loyalty and brand awareness going forward. I hope this has been helpful! Please feel free to contact us if you have any further questions about Snapchat. We love Snapchat and we're always happy to talk about it. Thank you!

How to Get Your Business Started on Snapchat in 4 Steps is an original blog post first published on Go Fish Digital.

How to Increase Your App Store Ranking in 4 Easy Steps https://gofishdigital.com/blog/app-store-optimization-steps/ Thu, 06 Oct 2016 13:06:11 +0000

This video provides a 4-step guide to App Store Optimization (ASO) and explains how you can increase your app store ranking and your mobile app downloads with ASO.

Transcript of App Store Optimization – Increase Mobile App Downloads with These 4 Easy Steps

Hi, my name is Daniel Russell and welcome to another tech talk. Today I’ll be going through App Store Optimization.

Just like any search engine, app stores have a search algorithm that determines which apps show up first and which ones show up last. As I'm sure you can imagine, the apps that show up first get a whole lot more downloads than the apps that show up last. If you want to get more downloads, there are a few things you can do to improve your app's ranking in the app store: modifying your title and your app's description, improving your app's reviews and ratings, and increasing your app's download rate. The title and the description are fairly similar; with both, it all comes down to keywords and keyword research.

Title

For the title, we recommend including your app's name as well as two or three carefully chosen keywords. These keywords should be relatively high in search volume and should describe your app very accurately. For example, let's say I have an app called "Pulse" that monitors people's heart rates. After doing some keyword research, I find that "heart rate" and "heart rate monitor" are very frequently searched terms, and they also happen to accurately describe my app. So, in the app store, it would make sense to make my title "Pulse - Heart Rate Monitor" or even "Pulse - Heart Rate Monitor App".

Description

Next up is the description. The description gives you a lot more room for keywords than the title does, but it's still good to be fairly judicious with your keyword selection here as well. Make sure that your app is accurately described, but also make sure you do a good job of selling why people should download it, because, again, your download rate will impact your app's ranking.

Reviews & Ratings

Next up are reviews and ratings, and unfortunately this is an area where you don't have a ton of control. Apps with higher star ratings and more reviews typically rank better in the app store. One of the best ways to increase the number of reviews for your app is to ask for reviews inside the app. Improving the rating itself is a little more difficult. Besides improving your app and making it a better user experience, we also recommend making sure your customer service is properly set up, so that if people have issues with your app they can contact you and report a problem rather than going to the app store, writing a long review about the problem, and giving you a low star rating.

Download Rate

Finally, the download rate. If an app store sees that almost every person who comes to your app's page downloads your app, chances are you're going to go up in the rankings. Again, besides just making a killer app, it can be difficult to increase your download rate on the app store, but there are a few things you can do to improve your chances. We recommend doing some A/B testing with a few different items: the app icon, the screenshots of the app that you include, the description, and of course the price. By tracking your ranking in the app store and the number of downloads you receive over time, you can start to get a pretty good idea of which of these items has the most impact for your app; one simple way to judge whether a difference between two variants is meaningful is sketched below.
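
Here is a hedged sketch of one way to evaluate an A/B test of, say, two app icons: a standard two-proportion z-test on page views versus downloads. The counts are invented for illustration; this is generic statistics, not an app store feature.

```python
# A standard two-proportion z-test for comparing download conversion
# rates between two app page variants (e.g. two different icons).
# The view and download counts below are invented for illustration.

from math import erf, sqrt

def two_proportion_z_test(downloads_a, views_a, downloads_b, views_b):
    """Return both conversion rates, the z statistic, and a two-sided p-value."""
    p_a = downloads_a / views_a
    p_b = downloads_b / views_b
    # Pooled rate under the null hypothesis that the variants convert equally.
    pooled = (downloads_a + downloads_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

p_a, p_b, z, p = two_proportion_z_test(downloads_a=120, views_a=2000,
                                       downloads_b=165, views_b=2100)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

If the p-value is small (commonly below 0.05), the difference in download rate between the two variants is probably real rather than noise, and the winning variant is worth keeping.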

I hope this has been helpful, and again, if you have any questions please feel free to contact us; we're happy to go into more detail. Thanks, and have a great day!

How to Increase Your App Store Ranking in 4 Easy Steps is an original blog post first published on Go Fish Digital.

Google Glass to Perform Song Recognition, and Play ‘Name that Tune’? https://gofishdigital.com/blog/google-glass-song-recognition/ Thu, 23 Jun 2016 15:42:45 +0000

A More Fun Google Glass?

Last week Google was granted a patent that sounds like it might make Google Glass fun, an element that may have been missing from it before. Imagine that you hum a tune, sing a snippet of a song, or whistle, and Google could perform song recognition and tell you what the song is. If this works on Google Glass, I suspect it might come to other devices running Google software or Android as well.

The patent is:

Song identification trigger
Inventors: Basheer Tome
Assignee: Google
US Patent 9,367,613
Granted: June 14, 2016
Filed: January 30, 2014

Abstract:

The present disclosure provides a wearable computing device. The wearable computing device may include a control system configured to perform functions. The functions may include receiving sensor data from one or more sensors of the wearable computing device. The functions may also include determining whether the sensor data is indicative of humming, singing, or whistling by a wearer. The functions may also include causing the wearable computing device to perform a content recognition of audio content in an ambient environment of the wearable computing device in response to the sensor data being indicative of humming, singing, or whistling by the wearer.
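
In code, the trigger the abstract describes reduces to a simple gate: classify the wearer's own sensor data first, and only run content recognition on ambient audio when that data looks like humming, singing, or whistling. The sketch below is hypothetical; both the classifier and the recognizer are stubs, not Google's implementation.

```python
# A hypothetical sketch of the "song identification trigger" flow from
# the abstract: classify the wearer's sensor data, and only invoke
# content recognition on ambient audio when the wearer appears to be
# humming, singing, or whistling.

TRIGGER_BEHAVIORS = {"humming", "singing", "whistling"}

def classify_wearer_behavior(sensor_data: bytes) -> str:
    """Hypothetical stand-in for a classifier trained on the wearer's
    humming, singing, and whistling profiles (the patent suggests
    per-wearer profiles to track each person's particular style)."""
    return "humming"

def recognize_ambient_song(ambient_audio: bytes) -> dict:
    """Hypothetical stand-in for content recognition against a song database."""
    return {"title": "...", "artist": "...", "album": "...", "genre": "..."}

def maybe_identify_song(sensor_data: bytes, ambient_audio: bytes):
    behavior = classify_wearer_behavior(sensor_data)
    if behavior in TRIGGER_BEHAVIORS:
        # Only now do the expensive work of recognizing the ambient audio.
        return recognize_ambient_song(ambient_audio)
    return None  # No trigger behavior detected, so nothing runs.

print(maybe_identify_song(b"...", b"..."))
```

The design point is economy: recognizing ambient audio is relatively expensive, so the cheap wearer-behavior check acts as the trigger that decides when that work runs at all.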

A Little about the Inventor of This Process

The inventor of this song recognition patent, Basheer Tome, has his resume online, where he tells us that as a hardware design intern at Google X he:

Designed experimental hardware and software input technologies for Google Glass through a range of prototyping methods to inspire and rally people around new concept ideas & applications.

After some time at HP, he returned to Google, where he is presently a Hardware Engineer working on the Google Daydream controller.

It seems that Google Glass may also recognize that a person is nodding their head or tapping their foot to ambient music, and it may attempt to identify that song, including the song title, genre, artist, and album title. It may even offer the person wearing the heads-up display an option to purchase the song from a digital media library. (I could see Google offering an option like that for use with an Android phone, and the patent does hint at that possibility.)

Purchasing or Rating Recognized Songs

The patent tells us that it might attempt to capture a humming profile, a whistling profile, and a singing profile from the wearer of the device to “more accurately track a wearer’s particular hum, sing, or whistle.” It may also capture a “nodding profile” to know when a wearer might nod to accompany the music.

If the device captures humming, singing or whistling, it may record those sounds and use a song database to try to identify the song.
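
The patent doesn't spell out the matching technique, but audio fingerprinting is the usual approach to this kind of lookup: reduce the recording to compact hashes and vote for the song in the database with the most matching hashes. The toy sketch below is purely illustrative; real systems fingerprint spectral peaks rather than raw sample windows, and the two-song "database" here is invented.

```python
# A toy illustration of fingerprint-style song matching: hash short
# windows of an audio signal and vote for the song with the most
# matching hashes. Purely illustrative; real systems fingerprint
# spectral peaks, not raw sample windows.

from collections import Counter
import hashlib

def fingerprints(samples, window=4):
    """Hash each fixed-size window of samples into a short hex token."""
    for i in range(0, len(samples) - window + 1, window):
        chunk = ",".join(str(s) for s in samples[i:i + window])
        yield hashlib.md5(chunk.encode()).hexdigest()[:8]

# An invented two-song "database" of precomputed fingerprints.
song_db = {
    "Song A": list(fingerprints([1, 2, 3, 4, 5, 6, 7, 8])),
    "Song B": list(fingerprints([9, 8, 7, 6, 5, 4, 3, 2])),
}

def identify(recording):
    """Return the database song sharing the most fingerprints, if any."""
    recorded = set(fingerprints(recording))
    votes = Counter({title: sum(1 for p in prints if p in recorded)
                     for title, prints in song_db.items()})
    title, score = votes.most_common(1)[0]
    return title if score > 0 else None

print(identify([1, 2, 3, 4, 9, 9, 9, 9]))  # -> "Song A"
```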

If the device wearer owns the song they are whistling or singing or humming, instead of offering to allow them to purchase the song, it may ask them if they would like to rate the song. This system may also enable the wearer to look at lists of “recently hummed tunes,” “recently sung tunes,” and “recently whistled tunes.”

While the system offers to let a person purchase or rate those songs, the patent doesn't say anything about playing those songs for them. It would probably make sense to offer something like that as an option. At Apple's recent Worldwide Developers Conference, there was a presentation about Apple Music and the ways it is being upgraded, and one of the things shown was that it might display song lyrics for the songs people are playing. I could see a lyric display being an option in this process as well. Google Glass may end up becoming a great karaoke device; that would be a new concept for Google Glass, and one that could help with its resurrection.

Google Glass to Perform Song Recognition, and Play ‘Name that Tune’? is an original blog post first published on Go Fish Digital.
