RStudio

RStudio provides open source and enterprise-ready professional software for the R statistical computing environment.


RStudio Revenue, Funding, Number of Employees, Competitors and Acquisitions

RSTUDIO CEO

JJ Allaire

Founder & CEO


Approval Rating: 74/100


Founded: 2009

Headquarters: Boston, Massachusetts

Status: Private

Industry Sector: Systems Software


KEY STATS

Revenue: $5.0M (community estimate)

Employees: 31 (estimated)

TOP COMPETITORS


RStudio was founded in 2009 and its headquarters is located in Boston, Massachusetts, USA. RStudio has $5.0M in revenue and 31 employees. RStudio's top competitors are Birst, SAP and SAS.

RStudio Competitive Set

COMPANY | LEADERSHIP | CEO SCORE | EMPLOYEES (ESTIMATED IF PRIVATE) | TOTAL FUNDING | REVENUE (ESTIMATED IF PRIVATE)
RStudio | JJ Allaire, Founder & CEO | 74/100 | - | - | -
1. Birst | - | - | 205 | $139M | $12.8M
2. SAP | William R. McDermott, CEO | 67/100 | 87,114 | - | $39.2B
3. SAS | Jim Goodnight, Co-Founder & CEO | 48/100 | 14,175 | - | $3.2B
4. GoodData | Roman Stanek, Founder & CEO | 54/100 | 300 | $96.7M | $20M
5. Domo | Josh James, Chairman & CEO | 66/100 | 810 | $729.8M | $7.3M
6. Tableau Software | Adam Selipsky, President & CEO | 72/100 | 2,100 | $301.2M | $878.4M
7. Revolution Analytics | - | - | 126 | $47.7M | $21.5M
8. Continuum Analytics | Scott Collison, CEO | 59/100 | 90 | $50.3M | $3.5M


RStudio Leadership

NAME | TITLE | SOCIAL MEDIA
JJ Allaire | Founder & CEO | - -

RStudio News

RStudio Blog: Summer interns

We are excited to announce the first formal summer internship program at RStudio. The goal of our internship program is to enable RStudio employees to collaborate with current students to create impactful and useful applications that will help both RStudio users and the broader R community, and to help ensure that the community of R developers is representative of the community of R users. You will have the opportunity to work with some of the most influential data scientists and R developers, and to work on widely used R packages.

To be qualified for the internship, you need some existing experience writing code in R and using git + GitHub. To demonstrate these skills, your application needs to include a link to a package, Shiny app, or data analysis repository on GitHub. It's ok if you create it specially for this application; we just want some evidence that you're already familiar with the basic mechanics of collaborative development in R.

RStudio is a geographically distributed team, which means you can be based anywhere in the USA (next year, we'll try to support international interns too). That means, unless you are based in Boston or Seattle, you will be working 100% remotely, although we will pay for travel to one face-to-face meeting with your mentor. You will meet with your mentor for at least an hour a week, but otherwise you'll be working on your own.

Projects

We are recruiting interns for the following five projects:

  • Bootstrapping methods: Implement 1) classic bootstrap methods (confidence intervals and other methods) to work with rsample, yardstick, and potentially infer, as well as 2) modern bootstrap methods for performance estimation (e.g. 632, 632+ estimates) for rsample. Skills needed: knowledge of bootstrapping methods (e.g. Ch 5 of Davison and Hinkley) and tidyverse tools and packages. C++ would be advantageous but not required. Mentor: Max Kuhn
  • broom: broom provides a bunch of methods to turn models into tidy data frames. It's widely used but has lacked developer bandwidth to move it forward. Your job will be to resolve as many pull requests and issues as possible, while thinking about how to re-organise broom for long-term maintainability. Skills needed: experience with one or more modelling packages in R; strong communication skills. Mentor: David Robinson
  • ggplot2: ggplot2 is one of the biggest and most used packages in the tidyverse. In this internship you will learn enough about the internals that you can start contributing. You will learn the challenges of working with a large existing codebase, in an environment where any API change is likely to affect existing code. Skills needed: experience creating ggplot2 graphics for data analysis; previous package development experience. Mentor: Hadley Wickham
  • Shiny: Shiny lets R programmers quickly create interactive web applications with R. The focus of this internship will be on addressing open issues and working on general user interface improvements. You will learn about how Shiny works, and gain experience working on a project that is at the interface of data analysis and web programming. Skills needed: experience with JavaScript and CSS; some experience creating your own Shiny apps. Mentor: Winston Chang
  • The Tidies of March: Construct ~30 tidyverse data analysis exercises inspired by the Advent of Code. The main goal is to create an Advent of Code type of experience, but where the exercises cultivate and reward mastery of R, written in an idiomatic tidyverse style. Skills needed: documented experience using the tidyverse to analyze data and an appreciation of coding style/taste; experience with the R ecosystem for making websites. Mentor: Jenny Bryan

Apply now!

The internship pays USD $6,000, lasts 10 weeks, and will start around June 1. Applications close March 12. We value diverse viewpoints, and we encourage people with diverse backgrounds and experiences to apply.
RStudio Blog: TensorFlow for R

Over the past year we've been hard at work on creating R interfaces to TensorFlow, an open-source machine learning framework from Google. We are excited about TensorFlow for many reasons, not the least of which is its state-of-the-art infrastructure for deep learning applications. In the 2 years since it was initially open-sourced by Google, TensorFlow has rapidly become the framework of choice for both machine learning practitioners and researchers. On Saturday, we formally announced our work on TensorFlow during J.J. Allaire's keynote at rstudio::conf. In the keynote, J.J. describes not only the work we've done on TensorFlow but also discusses deep learning more broadly (what it is, how it works, and where it might be relevant to users of R in the years ahead).

New packages and tools

The R interface to TensorFlow consists of a suite of R packages that provide a variety of interfaces to TensorFlow for different tasks and levels of abstraction, including:

  • keras - a high-level interface for neural networks, with a focus on enabling fast experimentation (a minimal sketch appears at the end of this post).
  • tfestimators - implementations of common model types such as regressors and classifiers.
  • tensorflow - low-level interface to the TensorFlow computational graph.
  • tfdatasets - scalable input pipelines for TensorFlow models.

Besides the various R interfaces to TensorFlow, there are tools to help with the training workflow, including real-time feedback on training metrics within the RStudio IDE. The tfruns package provides tools to track and manage TensorFlow training runs and experiments.

Access to GPUs

Training convolutional or recurrent neural networks can be extremely computationally expensive, and benefits significantly from access to a recent high-end NVIDIA GPU. However, most users don't have this sort of hardware available locally. To address this we have provided a number of ways to use GPUs in the cloud, including:

  • The cloudml package, an R interface to Google's hosted machine learning engine.
  • RStudio Server with TensorFlow-GPU for AWS (an Amazon EC2 image preconfigured with NVIDIA CUDA drivers, TensorFlow, the TensorFlow for R interface, as well as RStudio Server).
  • Detailed instructions for setting up an Ubuntu 16.04 cloud desktop with a GPU using the Paperspace service.

There is also documentation on setting up a GPU on your local workstation if you already have the required NVIDIA GPU hardware.

Learning resources

We've also made a significant investment in learning resources, all of which are available on the TensorFlow for R website at https://tensorflow.rstudio.com. Some of the learning resources include:

  • Deep Learning with R - meant for statisticians, analysts, engineers, and students with a reasonable amount of R experience but no significant knowledge of machine learning and deep learning. You'll learn from more than 30 code examples that include detailed commentary and practical recommendations. You don't need previous experience with machine learning or deep learning: this book covers all the necessary basics from scratch. You don't need an advanced mathematics background, either: high-school-level mathematics should suffice in order to follow along.
  • Deep Learning with Keras Cheatsheet - a quick reference guide to the concepts and available functions in the R interface to Keras. Covers the various types of Keras layers, data preprocessing, the training workflow, and pre-trained models.
  • Gallery - in-depth examples of using TensorFlow with R, including detailed explanatory narrative as well as coverage of ancillary tasks like data preprocessing and visualization. A great resource for taking the next step after you've learned the basics.
  • Examples - introductory examples of using TensorFlow with R. These examples cover the basics of training models with the keras, tfestimators, and tensorflow packages.

What's next

We'll be continuing to build packages and tools that make using TensorFlow from R easy to learn, productive, and capable of addressing the most challenging problems in the field. We'll also be making an ongoing effort to add to our gallery of in-depth examples. To stay up to date on our latest tools and additions to the gallery, you can subscribe to the TensorFlow for R Blog.

While TensorFlow and deep learning have done some impressive things in fields like image classification and speech recognition, their use within other domains like biomedical and time series analysis is more experimental and not yet proven to be of broad benefit. We're excited to see how the R community will push the frontiers of what's possible, as well as find entirely new applications. If you are an R user who has been curious about TensorFlow and/or deep learning applications, now is a great time to dive in and learn more!
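The post itself contains no code, so here is a minimal sketch of the keras interface it describes. This is not taken from the original post: the layer sizes and toy data are invented for illustration, and running it assumes TensorFlow has been installed (e.g. via install_keras()).

    # Minimal, illustrative keras-for-R sketch (invented toy data and shapes)
    library(keras)

    # toy data: 100 samples, 10 features, binary labels
    x <- matrix(runif(1000), nrow = 100, ncol = 10)
    y <- sample(0:1, 100, replace = TRUE)

    # define a small feed-forward network
    model <- keras_model_sequential() %>%
      layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
      layer_dense(units = 1, activation = "sigmoid")

    # compile with a standard loss/optimizer for binary classification
    model %>% compile(
      optimizer = "rmsprop",
      loss = "binary_crossentropy",
      metrics = "accuracy"
    )

    # train briefly on the toy data
    model %>% fit(x, y, epochs = 5, batch_size = 16)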
RStudio Blog: sparklyr 0.7

We are excited to share that sparklyr 0.7 is now available on CRAN! Sparklyr provides an R interface to Apache Spark. It supports dplyr syntax for working with Spark DataFrames and exposes the full range of machine learning algorithms available in Spark. You can also learn more about Apache Spark and sparklyr at spark.rstudio.com and our new webinar series on Apache Spark. Features in this release:

  • Adds support for ML Pipelines, which provide a uniform set of high-level APIs to help create, tune, and deploy machine learning pipelines at scale.
  • Enhances machine learning capabilities by supporting the full range of ML algorithms and feature transformers.
  • Improves data serialization, specifically by adding support for date columns.
  • Adds support for YARN cluster mode connections (a connection sketch appears at the end of this post).
  • Adds various other improvements as listed in the NEWS file.

In this blog post, we highlight Pipelines, new ML functions, and enhanced support for data serialization. To follow along in the examples below, you can upgrade to the latest stable version from CRAN with:

    install.packages("sparklyr")

ML Pipelines

The ML Pipelines API is a high-level interface for building ML workflows in Spark. Pipelines provide a uniform approach to compose feature transformers and ML routines, and are interoperable across the different Spark APIs (R/sparklyr, Scala, and Python).

First, a quick overview of terminology. A Pipeline consists of a sequence of stages (PipelineStages) that act on some data in order. A PipelineStage can be either a Transformer or an Estimator. A Transformer takes a data frame and returns a transformed data frame, whereas an Estimator takes a data frame and returns a Transformer. You can think of an Estimator as an algorithm that can be fit to some data, e.g. the ordinary least squares (OLS) method, and a Transformer as the fitted model, e.g. the linear formula that results from OLS. A Pipeline is itself a PipelineStage and can be an element in another Pipeline. Lastly, a Pipeline is always an Estimator, and its fitted form is called a PipelineModel, which is a Transformer.

Let's look at some examples of creating pipelines. We establish a connection and copy some data to Spark:

    library(sparklyr)
    library(dplyr)

    # If needed, install Spark locally via `spark_install()`
    sc <- spark_connect(master = "local")
    iris_tbl <- copy_to(sc, iris)

    # split the data into train and validation sets
    iris_data <- iris_tbl %>%
      sdf_partition(train = 2/3, validation = 1/3, seed = 123)

Then, we can create a new Pipeline with ml_pipeline() and add stages to it via the %>% operator. Here we also define a transformer from dplyr transformations using the newly available ft_dplyr_transformer():

    pipeline <- ml_pipeline(sc) %>%
      ft_dplyr_transformer(
        iris_data$train %>%
          mutate(Sepal_Length = log(Sepal_Length),
                 Sepal_Width = Sepal_Width ^ 2)
      ) %>%
      ft_string_indexer("Species", "label")

    pipeline
    ## Pipeline (Estimator) with 2 stages
    ## <pipeline_c75757b824f>
    ##   Stages
    ##   |--1 SQLTransformer (Transformer)
    ##   |    <dplyr_transformer_c757fa84cca>
    ##   |    (Parameters -- Column Names)
    ##   |--2 StringIndexer (Estimator)
    ##   |    <string_indexer_c75307cbfec>
    ##   |    (Parameters -- Column Names)
    ##   |      input_col: Species
    ##   |      output_col: label
    ##   |    (Parameters)
    ##   |      handle_invalid: error

Under the hood, ft_dplyr_transformer() extracts the SQL statements associated with the input and creates a Spark SQLTransformer, which can then be applied to new datasets with the appropriate columns. We now fit the Pipeline with ml_fit(), then transform some data using the resulting PipelineModel with ml_transform():

    pipeline_model <- pipeline %>%
      ml_fit(iris_data$train)

    # pipeline_model is a transformer
    pipeline_model %>%
      ml_transform(iris_data$validation) %>%
      glimpse()
    ## Observations: ??
    ## Variables: 6
    ## $ Petal_Length <dbl> 1.4, 1.3, 1.3, 1.0, 1.6, 1.9, 3.3, 4.5, 1.6, 1.5,...
    ## $ Petal_Width  <dbl> 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 1.0, 1.7, 0.2, 0.2,...
    ## $ Species      <chr> "setosa", "setosa", "setosa", "setosa", "setosa",...
    ## $ Sepal_Length <dbl> 1.482, 1.482, 1.482, 1.526, 1.548, 1.569, 1.589, ...
    ## $ Sepal_Width  <dbl> 8.41, 9.00, 10.24, 12.96, 10.24, 11.56, 5.76, 6.2...
    ## $ label        <dbl> 1, 1, 1, 1, 1, 1, 0, 2, 1, 1, 1, 0, 1, 1, 1, 1, 1...

A predictive modeling pipeline

Now, let's try to build a classification pipeline on the iris dataset. Spark ML algorithms require that the label column be encoded as numeric and predictor columns be encoded as one vector column. We'll build on the pipeline we created in the previous section, where we have already included a StringIndexer stage to encode the label column.

    # define stages
    # vector_assembler will concatenate the predictor columns into one vector column
    vector_assembler <- ft_vector_assembler(
      sc,
      input_cols = setdiff(colnames(iris_data$train), "Species"),
      output_col = "features"
    )
    logistic_regression <- ml_logistic_regression(sc)

    # obtain the labels from the fitted StringIndexerModel
    labels <- pipeline_model %>%
      ml_stage("string_indexer") %>%
      ml_labels()

    # IndexToString will convert the predicted numeric values back to class labels
    index_to_string <- ft_index_to_string(sc, "prediction", "predicted_label",
                                          labels = labels)

    # construct a pipeline with these stages
    prediction_pipeline <- ml_pipeline(
      pipeline,  # pipeline from previous section
      vector_assembler,
      logistic_regression,
      index_to_string
    )

    # fit to data and make some predictions
    prediction_model <- prediction_pipeline %>%
      ml_fit(iris_data$train)
    predictions <- prediction_model %>%
      ml_transform(iris_data$validation)
    predictions %>%
      select(Species, label:predicted_label) %>%
      glimpse()
    ## Observations: ??
    ## Variables: 7
    ## $ Species         <chr> "setosa", "setosa", "setosa", "setosa", "setos...
    ## $ label           <dbl> 1, 1, 1, 1, 1, 1, 0, 2, 1, 1, 1, 0, 1, 1, 1, 1...
    ## $ features        <list> [<1.482, 8.410, 1.400, 0.200>, <1.482, 9.000,...
    ## $ rawPrediction   <list> [<-67.48, 2170.98, -2103.49>, <-124.4, 2365.8...
    ## $ probability     <list> [<0, 1, 0>, <0, 1, 0>, <0, 1, 0>, <0, 1, 0>, ...
    ## $ prediction      <dbl> 1, 1, 1, 1, 1, 1, 0, 2, 1, 1, 1, 0, 1, 1, 1, 1...
    ## $ predicted_label <chr> "setosa", "setosa", "setosa", "setosa", "setos...

Model persistence

Another benefit of pipelines is reusability across programming languages and easy deployment to production. We can save a pipeline from R as follows:

    ml_save(prediction_model, "path/to/prediction_model")

When you call ml_save() on a Pipeline or PipelineModel object, all of the information required to recreate it will be saved to disk. You can then load it in the future to, in the case of a PipelineModel, make predictions or, in the case of a Pipeline, retrain on new data.

Machine learning

Sparklyr 0.7 introduces more than 20 new feature transformation and machine learning functions to include the full set of Spark ML algorithms. We highlight just a couple here.

Bisecting k-means

Bisecting k-means is a variant of k-means that can sometimes be much faster to train. Here we show how to use ml_bisecting_kmeans() with the iris data:

    library(ggplot2)

    model <- ml_bisecting_kmeans(iris_tbl, Species ~ Petal_Length + Petal_Width,
                                 k = 3, seed = 123)
    predictions <- ml_predict(model, iris_tbl) %>%
      collect() %>%
      mutate(cluster = as.factor(prediction))

    ggplot(predictions,
           aes(x = Petal_Length, y = Petal_Width, color = cluster)) +
      geom_point()

Frequent pattern mining

ml_fpgrowth() enables frequent pattern mining at scale using the FP-Growth algorithm. See the Spark ML documentation for more details. Here we briefly showcase the sparklyr API:

    # create an item purchase history dataset
    items <- data.frame(items = c("1,2,5", "1,2,3,5", "1,2"),
                        stringsAsFactors = FALSE)

    # parse into vector column
    items_tbl <- copy_to(sc, items) %>%
      mutate(items = split(items, ","))

    # fit the model
    fp_model <- items_tbl %>%
      ml_fpgrowth(min_support = 0.5, min_confidence = 0.6)

    # use the model to predict related items based on
    # learned association rules
    fp_model %>%
      ml_transform(items_tbl) %>%
      collect() %>%
      mutate_all(function(x) sapply(x, paste0, collapse = ","))
    ## # A tibble: 3 x 2
    ##   items   prediction
    ##   <chr>   <chr>
    ## 1 1,2,5   ""
    ## 2 1,2,3,5 ""
    ## 3 1,2     5

Data serialization

Various improvements were made to better support serialization and collection of data frames. Most notably, dates are now supported:

    copy_to(sc, nycflights13::flights) %>%
      select(carrier, flight, time_hour)
    ## # Source:   lazy query [?? x 3]
    ## # Database: spark_connection
    ##    carrier flight time_hour
    ##    <chr>    <int> <dttm>
    ##  1 UA        1545 2013-01-01 05:00:00
    ##  2 UA        1714 2013-01-01 05:00:00
    ##  3 AA        1141 2013-01-01 05:00:00
    ##  4 B6         725 2013-01-01 05:00:00
    ##  5 DL         461 2013-01-01 06:00:00
    ##  6 UA        1696 2013-01-01 05:00:00
    ##  7 B6         507 2013-01-01 06:00:00
    ##  8 EV        5708 2013-01-01 06:00:00
    ##  9 B6          79 2013-01-01 06:00:00
    ## 10 AA         301 2013-01-01 06:00:00
    ## # ... with more rows

We can't wait to see what you'll build with the new features! As always, comments, issue reports, and contributions are welcome on the sparklyr GitHub repo.
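The YARN cluster mode support listed in the release features is not demonstrated in the post; here is a minimal connection sketch, assuming a machine inside an already-configured Hadoop/YARN environment. The master value follows the sparklyr 0.7 convention, and the executor memory setting is an invented example, not a recommendation:

    # Hypothetical YARN cluster mode connection (requires a configured
    # Hadoop/YARN environment; the memory setting is illustrative only)
    library(sparklyr)

    config <- spark_config()
    config$spark.executor.memory <- "2g"  # invented example setting

    sc <- spark_connect(master = "yarn-cluster", config = config)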
RStudio Blog: RStudio Connect v1.5.12

We're pleased to announce RStudio Connect 1.5.12. This release includes support for viewing historical content, per-application timeout settings, and important improvements and bug fixes.

Historical content

RStudio Connect now retains and displays historical content. By selecting the content's history, viewers can easily navigate, compare, and email prior versions of content. Historical content is especially valuable for scheduled reports. Previously published documents, plots, and custom versions of parameterized reports are also saved. Administrators can control how much history is saved by specifying a maximum age and/or a maximum number of historical versions.

Timeout settings

Timeout settings can be customized for specific Shiny applications or Plumber APIs. These settings allow publishers to optimize timeouts for specific content. For example, a live-updating dashboard might be kept open without expecting user input, while a resource-intensive, interactive app might be more aggressively shut down when idle. Idle Timeout, Initial Timeout, Connection Timeout, and Read Timeout can all be customized.

Along with this improvement, be aware of a BREAKING CHANGE: the Applications.ConnectionTimeout and Applications.ReadTimeout settings, which specify server default timeouts for all content, have been deprecated in favor of Scheduler.ConnectionTimeout and Scheduler.ReadTimeout (a hedged config sketch follows this post).

Other improvements

  • A new security option, "Web Sudo Mode", is enabled by default and requires users to re-enter their password before performing sensitive actions like altering users, altering API keys, and linking RStudio to Connect.
  • The usermanager command line interface (CLI) can be used to update user first and last name, email, and username in addition to user role. User attributes that are managed by your external authentication provider will continue to be managed externally, but the CLI can be used by administrators to complete other fields in user profiles.
  • The Connect dashboard will show administrators and publishers a warning as license expiration nears.
  • BREAKING CHANGE: The RequireExternalUsernames option deprecated in 1.5.10 has been removed.
  • Known issue: After installing RStudio Connect 1.5.12, previously deployed content may incorrectly display an error message. Refreshing the browser will fix the error.

You can see the full release notes for RStudio Connect 1.5.12 here.

Upgrade planning

There are no special precautions to be aware of when upgrading from v1.5.10, apart from the breaking changes and known issue listed above and in the release notes. You can expect the installation and startup of v1.5.12 to complete in under a minute. If you're upgrading from a release older than v1.5.10, be sure to consider the "Upgrade Planning" notes from the intervening releases as well.

If you haven't yet had a chance to download and try RStudio Connect, we encourage you to do so. RStudio Connect is the best way to share all the work that you do in R (Shiny apps, R Markdown documents, plots, dashboards, Plumber APIs, etc.) with collaborators, colleagues, or customers. You can find more details or download a 45-day evaluation of the product at https://www.rstudio.com/products/connect/. Additional resources:

  • RStudio Connect home page & downloads
  • RStudio Connect Admin Guide
  • What IT needs to know about RStudio Connect
  • Detailed news and changes between each version
  • Pricing
  • An online preview of RStudio Connect
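For illustration only, a hedged sketch of what the new scheduler timeouts might look like in the Connect server configuration file. The file path and duration values are assumptions, not taken from the post; consult the admin guide for the authoritative syntax and defaults:

    ; Hypothetical excerpt from /etc/rstudio-connect/rstudio-connect.gcfg
    ; (values are illustrative, not recommended defaults)
    [Scheduler]
    ConnectionTimeout = 60s
    ReadTimeout = 1h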
RStudio Blog: Shiny Server (Pro) 1.5.6

Shiny Server 1.5.6.875 and Shiny Server Pro 1.5.6.902 are now available. This release of Shiny Server Pro includes floating license support, and Shiny Server contains a small enhancement to the way errors are displayed. We recommend upgrading at your earliest convenience.

Shiny Server 1.5.6.875

  • Use HTTPS for Google Fonts on the error page, which resolves insecure content errors on some browsers when run behind SSL. (PR #322)

Shiny Server Pro 1.5.6.902

This release adds floating license support through the license_type configuration directive (a sketch follows this post). Full documentation can be found at http://docs.rstudio.com/shiny-server/#floating-licenses.

Floating licensing allows you to run fully licensed copies of Shiny Server Pro easily in ephemeral instances, such as Docker containers, virtual machines, and EC2 instances. Instances don't have to be individually licensed, and you don't have to manually activate and deactivate a license in each instance. Instead, a lightweight license server distributes temporary licenses ("leases") to each instance, and the instance uses the license only while it's running. This model is perfect for environments in which Shiny Server Pro instances are frequently created and destroyed on demand, and only requires that you purchase a license for the maximum number of concurrent instances you want to run.
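A hedged sketch of the directive named above in the server configuration. The file path and placement are assumptions for illustration; see the linked documentation for the real syntax and any companion settings:

    # Hypothetical excerpt from /etc/shiny-server/shiny-server.conf
    # (placement is illustrative; see docs.rstudio.com/shiny-server)
    license_type floating;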
RStudio Blog: RStudio Connect v1.5.10

We're pleased to announce version 1.5.10 of RStudio Connect and the general availability of RStudio Connect Execution Servers. Execution Servers enable horizontal scaling and high availability for all the content you develop in R. The 1.5.10 release also includes important security improvements and bug fixes.

RStudio Connect Execution Servers

Support for high availability and horizontal scaling is now generally available through RStudio Connect Execution Servers. Execution Servers enable RStudio Connect to run across a multi-node cluster. Today, Execution Servers act as identically configured Connect instances. Requests for Shiny applications and Plumber APIs are split across nodes by a load balancer. Scheduled R Markdown execution is distributed across the cluster through an internal job scheduler that distributes work evenly across nodes. Over time, more of Connect's work will be handled by the internal scheduler, giving admins control over which nodes accomplish certain tasks. The admin guide includes configuration instructions. Contact sales for licensing information.

Other improvements

  • For configurations using SQLite, the SQLite database is automatically backed up while Connect is running. By default, three backups are retained and a new backup is taken every 24 hours. To disable, set [Sqlite].Backup to false in the server configuration file (a hedged sketch follows this post).
  • RStudio Connect has always isolated user code from the file system. For example, application A cannot access data uploaded with application B. In 1.5.10, R processes can now read from the /tmp and /var/tmp directories. This change enables shared files to be stored in /tmp and /var/tmp and helps facilitate Kerberos configurations. R processes still have isolated temporary directories provided at runtime and accessible with the tempdir function and TMPDIR environment variable. See section 12 of the admin guide for more details on process sandboxing.
  • Improvements have been made in RStudio Connect and the rsconnect package to support deployments using proxied authentication. See the admin guide for details on setting up the proxy. Anonymous viewers and requests authenticated with API keys are also now supported with proxied auth.
  • Scheduled reports are now re-run if execution is interrupted by a server restart. In a cluster, reports are automatically re-run if a node goes down, assuring high availability for scheduled renderings.
  • AdminEditableUsernames is disabled by default for compatibility with the RequireExternalUsernames flag introduced in 1.5.8. These changes increase security by preventing changes to data supplied by authentication providers.
  • User session expiration is better enforced. All user browser sessions will need to log in again after the 1.5.10 upgrade.
  • Runtime environments for Shiny R Markdown documents have changed to support rmarkdown versions 1.7+.

You can see the full release notes for RStudio Connect 1.5.10 here.

Upgrade planning

There are no special precautions to be aware of when upgrading from 1.5.8 to 1.5.10. Installation and startup should take less than a minute.

If you haven't yet had a chance to download and try RStudio Connect, we encourage you to do so. RStudio Connect is the best way to share all the work that you do in R (Shiny apps, R Markdown documents, plots, dashboards, Plumber APIs, etc.) with collaborators, colleagues, or customers. You can find more details or download a 45-day evaluation of the product at https://www.rstudio.com/products/connect/. Additional resources:

  • RStudio Connect Admin Guide
  • Detailed Release Notes
  • Pricing
  • Online preview of RStudio Connect
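A hedged sketch of disabling the automatic SQLite backups described above. The section and key names follow the post; the file path is an assumption:

    ; Hypothetical excerpt from /etc/rstudio-connect/rstudio-connect.gcfg
    [Sqlite]
    Backup = false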
RStudio Blog: Birds of a Feather sessions at rstudio::conf 2018 and the rstudio::conf app!

RStudio appreciates the hundreds of smart, passionate data science enthusiasts who have already registered for rstudio::conf 2018. We're looking forward to a fantastic conference, immersing in all things R & RStudio. If you haven't registered yet, please do! Some workshops are now full. We are also over 90% of our registration target, with more than 2 months to go. It's safe to say we will sell out. The sooner you are able to register, the better. It's going to be an amazing time!

For those who have registered, we'd like to help you connect with others doing similar work. Attendees include many kinds of professionals in physical, natural, social and data sciences, statistics, education, engineering, research, BI, IT data infrastructure, finance, marketing, customer support, operations, human resources...and many more. They are sole proprietors and work for the world's largest companies. They live in developing and developed countries. They use R and RStudio to explore data, develop polished reports, publish interactive visualizations, or create production code central to the success of their company. They share a common bond: a commitment to R, enthusiasm for RStudio products, and a desire to become better data scientists.

To foster relationships among people doing similar work, we've made time and arranged spaces for 9 total Birds of a Feather (BoF) sessions. They will be held during breakfast and lunch, so you can grab a meal and head to your preferred BoF room! Some topics seem obvious to us. For example, we will definitely set aside rooms for Life Sciences, Financial Services, Education, and Training & Consulting Partner BoFs. As we see it, a BoF is just a short unconference session within a conference, organized or left un-organized (mostly) by participants! Topics may be narrow or broad. Some may have agendas and others may be purely for networking. At a minimum, each room will have a friendly RStudio proctor, chairs, a screen to present, and flipcharts for those who are inspired to create discussion sub-groups, share material broadly, or collaborate.

What Birds of a Feather sessions would you like to attend? In addition to the 4 BoFs we will set aside rooms for, we're going to use the new community.rstudio.com as a place to discuss which 5 additional BoFs should be allocated time and space. If you are registered for rstudio::conf or planning to register, head on over and look for the rstudio::conf category and the "your ideas for birds of a feather sessions" topic to start proposing and upvoting! Once the BoF session topics are decided, we'll load them into our mobile app for the conference. This, along with community.rstudio.com, will allow for pre-BoF discussions so you can hit the ground running in San Diego!

rstudio::conf 2018 is the conference for all things R & RStudio. Training Days are on January 31 and February 1. The conference is February 2 and 3. Interested in having your company sponsor rstudio::conf 2018? It's a unique opportunity to share your support for R users. Contact anne@rstudio.com for more information.
RStudio Blog: pool package on CRAN

The pool package makes it easier for Shiny developers to connect to databases. Up until now, there wasn't a clearly good way to do this. As a Shiny app author, if you connect to a database globally (outside of the server function), your connection won't be robust because all sessions would share that connection (which could leave most users hanging when one of them is using it, or even all of them if the connection breaks). But if you try to connect each time that you need to make a query (e.g. for every reactive you have), your app becomes a lot slower, as it can take on the order of seconds to establish a new connection. The pool package solves this problem by taking care of when to connect and disconnect, allowing you to write performant code that automatically reconnects to the database only when needed.

So, if you are a Shiny app author who needs to connect and interact with databases inside your apps, keep reading, because this package was created to make your life easier.

What the pool package does

The pool package adds a new level of abstraction when connecting to a database: instead of directly fetching a connection from the database, you will create an object (called a "pool") with a reference to that database. The pool holds a number of connections to the database. Some of these may be currently in use and some of these may be idle, waiting for a new query or statement to request them. Each time you make a query, you are querying the pool, rather than the database. Under the hood, the pool will either give you an idle connection that it previously fetched from the database or, if it has no free connections, fetch one and give it to you. You never have to create or close connections directly: the pool knows when it should grow, shrink, or keep steady. You only need to close the pool when you're done.

Since pool integrates with both DBI and dplyr, there are very few things that will be new to you if you're already using either of those packages. Essentially, you shouldn't feel the difference, with the exception of creating and closing a "Pool" object (as opposed to connecting and disconnecting a "DBIConnection" object). See this copy-pasteable app that uses pool and dplyr to query a MariaDB database (hosted on AWS) inside a Shiny app.

Very briefly, here's how you'd connect to a database, write a table into it using DBI, query it using dplyr, and finally disconnect (you must have DBI, dplyr, and pool installed and loaded in order to run this code):

    library(DBI)
    library(dplyr)

    conn <- dbConnect(RSQLite::SQLite(), dbname = ":memory:")
    dbWriteTable(conn, "quakes", quakes)
    tbl(conn, "quakes") %>% select(-stations) %>% filter(mag >= 6)
    ## # Source:   lazy query [?? x 4]
    ## # Database: sqlite 3.19.3 [:memory:]
    ##      lat   long depth   mag
    ##    <dbl>  <dbl> <int> <dbl>
    ## 1 -20.70 169.92   139   6.1
    ## 2 -13.64 165.96    50   6.0
    ## 3 -15.56 167.62   127   6.4
    ## 4 -12.23 167.02   242   6.0
    ## 5 -21.59 170.56   165   6.0
    dbDisconnect(conn)

And here's how you'd do the same using pool:

    library(pool)

    pool <- dbPool(RSQLite::SQLite(), dbname = ":memory:")
    dbWriteTable(pool, "quakes", quakes)
    tbl(pool, "quakes") %>% select(-stations) %>% filter(mag >= 6)
    ## # Source:   lazy query [?? x 4]
    ## # Database: sqlite 3.19.3 [:memory:]
    ##      lat   long depth   mag
    ##    <dbl>  <dbl> <int> <dbl>
    ## 1 -20.70 169.92   139   6.1
    ## 2 -13.64 165.96    50   6.0
    ## 3 -15.56 167.62   127   6.4
    ## 4 -12.23 167.02   242   6.0
    ## 5 -21.59 170.56   165   6.0
    poolClose(pool)

What problem pool was created to solve

As mentioned before, the goal of the pool package is to abstract away the logic of connection management and the performance cost of fetching a new connection from a remote database. These concerns are especially prominent in interactive contexts, like Shiny apps. (So, while this package is of most practical value to Shiny developers, there is no harm if it is used in other contexts.) The rest of this post elaborates on the specific problems of connection management inside of Shiny, and how pool addresses them.

The connection management spectrum

When you're connecting to a database, it's important to manage your connections: when to open them (taking into account that this is a potentially long process for remote databases), how to keep track of them, and when to close them. This is always true, but it becomes especially relevant for Shiny apps, where not following best practices can lead to many slowdowns (from inadvertently opening too many connections) and/or many leaked connections (i.e. forgetting to close connections once you no longer need them). Over time, leaked connections could accumulate and substantially slow down your app, as well as overwhelm the database itself.

Oversimplifying a bit, we can think of connection management in Shiny as a spectrum ranging from the extreme of just having one connection per app (potentially serving several sessions of the app) to the extreme of opening (and closing) one connection for each query you make. Neither of these approaches is great: the former is fast but not robust, and the reverse is true for the latter.

In particular, opening only one connection per app makes it fast (because, in the whole app, you only fetch one connection) and your code is kept as simple as possible. However:

  • it cannot handle simultaneous requests (e.g. two sessions open, both querying the database at the same time);
  • if the connection breaks at some point (maybe the database server crashed), you won't get a new connection (you have to exit the app and re-run it);
  • finally, if you are not quite at this extreme and you use more than one connection per app (but fewer than one connection per query), it can be difficult to keep track of all your connections, since you'll be opening and closing them in potentially very different places.

While the other extreme of opening (and closing) one connection for each query you make resolves all of these points, it is terribly slow (each time we need to access the database, we first have to fetch a connection), and you need a lot more (boilerplate) code to connect and disconnect the connection within each reactive/function. If you'd like to see actual code that illustrates these two approaches, check this section of the pool README.

The best of both worlds

The pool package was created so that you don't have to worry about this at all. Since pool abstracts away the logic of connection management, for the vast majority of cases you never have to deal with connections directly. Since the pool "knows" when it should have more connections and how to manage them, you have all the advantages of the second approach (one connection per query), without the disadvantages. You are still using one connection per query, but that connection is always fetched from and returned to the pool, rather than fetched from the database directly. This is a whole lot faster and more efficient. Finally, the code is kept just as simple as the code in the first approach (only one connection for the entire app), since you don't have to continuously call dbConnect and dbDisconnect. (A minimal Shiny sketch follows this post.)

Feedback

This package has quietly been around for a year and it's now finally on CRAN, following lots of the changes in the database world (both in DBI and dplyr). All pool-related feedback is welcome. Issues (bugs and feature requests) can be posted to the GitHub tracker. Requests for help with code or other questions can be posted to community.rstudio.com/c/shiny, which I check regularly (they can, of course, also be posted to Stack Overflow, but I'm extremely likely to miss it).
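The post stops short of showing pool inside an actual Shiny app, so here is a minimal sketch under stated assumptions: an in-memory SQLite database, the built-in quakes dataset, and an invented UI and query, none of which come from the original post.

    # Minimal, hypothetical Shiny + pool sketch (not from the original post).
    # One pool is shared across all sessions and closed when the app stops.
    library(shiny)
    library(DBI)
    library(pool)

    pool <- dbPool(RSQLite::SQLite(), dbname = ":memory:")
    dbWriteTable(pool, "quakes", quakes)
    onStop(function() poolClose(pool))

    ui <- fluidPage(
      numericInput("mag", "Minimum magnitude", value = 6),
      tableOutput("hits")
    )

    server <- function(input, output, session) {
      output$hits <- renderTable({
        # each query checks a connection out of the pool and returns it
        dbGetQuery(pool, "SELECT * FROM quakes WHERE mag >= ?",
                   params = list(input$mag))
      })
    }

    shinyApp(ui, server)

Compared with opening a connection inside every reactive, this keeps the per-query cost low while still letting simultaneous sessions query independently.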
Computerworld: RStudio cloud service in the works?

An interesting result popped up when I searched for RStudio cloud this morning: RStudio.cloud, from RStudio itself. "Kick the tires to your heart's delight - but don't plan on taking a cross-country trip just yet," the service warns on its home page. I logged in using my existing shinyapps.io account, which the home page advises to do. If you don't use shinyapps.io, you can set up a new account there, or use Google or GitHub credentials. The opening screen lets you create a project.

[Screenshot: page to create a project at RStudio.cloud]
RStudio Blog: rstudio::conf(2018) program now available!

rstudio::conf 2018, the conference on all things R and RStudio, is only a few months away. Now is the time to claim your spot or grab one of the few remaining seats at Training Days! Whether you're already registered or still working on it, we're delighted today to announce the full conference schedule, so that you can plan your days in San Diego. rstudio::conf 2018 takes place January 31 to February 3 at the Manchester Grand Hyatt in San Diego, California.

This year we have over 60 talks:

  • Keynotes by Dianne Cook, "To the Tidyverse and Beyond: Challenges for the Future in Data Science", and JJ Allaire, "Machine Learning with R and TensorFlow".
  • 14 invited talks from outstanding speakers, innovators, and data scientists.
  • 18 contributed talks from the R community on topics like "Branding and automating your work with R Markdown", "Reinforcement learning in Minecraft with CNTK-R", and "Training an army of new data scientists".
  • 28 talks by RStudio employees on the latest developments in the tidyverse, Spark profiling, Shiny, R Markdown, databases, RStudio Connect, and more!

We also have 11 two-day workshops (for both beginners and experts!) if you want to go deep into a topic. We look forward to seeing you there!


RStudio Website History

Screengrabs of how the RStudio site has evolved: Jun 2014, Sep 2014, Jul 2015, Nov 2015 (two captures), Feb 2016, May 2016, Aug 2016, Nov 2016, Feb 2017, May 2017, and Sep 2017.

Owler has collected 12 screenshots of RStudio's website since Jun 2014. The latest RStudio website design screenshot was captured in Sep 2017.

RStudio Headquarters


250 Northern Ave

Boston, Massachusetts 02210

844-448-1212


RStudio Summary Information

RStudio provides open source and enterprise-ready professional software for the R statistical computing environment. RStudio was founded in 2009. RStudio's headquarters is located in Boston, Massachusetts, USA 02210. RStudio's Founder & CEO, JJ Allaire, currently has an approval rating of 74%. 100% of the Owler community believes RStudio will IPO. RStudio has an estimated 31 employees and an estimated annual revenue of $5.0M.

RStudio's primary competitors are Birst, SAP, and SAS.

Visit the RStudio website to learn more.