Pastebin grabber (email:pass) Tutorial


#Pastebin Grabber #1
####First Words:
This is my first tutorial ever! So if anything feels hard to understand, please tell me :wink:

####Before starting:
For this tutorial I will use the Ruby language.
You will need these gems to follow the tutorial:
gem install httparty --> for HTTP requests
gem install nokogiri --> for HTML parsing

####How it works :

  1. Run a Google search restricted to the pastebin website
  2. Parse the Google result links to keep only the pastebin links
  3. Create threads for inspecting the links
  4. Inspect each link and write the combos to a file

##Make a search on the pastebin site:

We just use the Google operator “site:” to get only links from pastebin.com, followed by our query:

"combo email pass"

page = HTTParty.get('https://www.google.com/search?q=site:pastebin.com+combo+email+pass')

page_parsed = Nokogiri::HTML(page)  #Get a "code readable page"

More information on Google operators here -> Google operators
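If you want to build the search URL programmatically instead of hard-coding it, the query can be URL-encoded with Ruby's standard library. This is a sketch, not part of the original tutorial; `google_search_url` is a helper name I made up:

```ruby
require 'cgi'

# Build a Google search URL using the "site:" operator.
# CGI.escape encodes ":" as %3A and spaces as "+".
def google_search_url(site, query)
  "https://www.google.com/search?q=#{CGI.escape("site:#{site} #{query}")}"
end

puts google_search_url('pastebin.com', 'combo email pass')
# https://www.google.com/search?q=site%3Apastebin.com+combo+email+pass
```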

##Get all links:

Just use CSS selectors on our parsed page to get all the links.

link_array = []

page_parsed.css('.r a').map do |a|    #for each link (CSS selector)
  link_array.push(a["href"][7..-1])   #store the link, minus Google's "/url?q=" prefix
end

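A quick aside on the `[7..-1]` slice, since it looks like magic: Google wraps each result href as `/url?q=<target>`, and `"/url?q="` is exactly seven characters, so slicing from index 7 to the end leaves the real URL. A pure-string demo (the paste ID is made up):

```ruby
# Google result hrefs look like "/url?q=<real URL>".
# "/url?q=" is 7 characters, so [7..-1] keeps index 7 through the end.
href = "/url?q=http://pastebin.com/abc123"  # illustrative href, not a real paste
real_url = href[7..-1]
puts real_url  # http://pastebin.com/abc123
```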
##Create Threads:
We create one thread per link so the links are inspected in parallel, collecting the threads in an array.

threads = []
link_array.each do |item|
  threads << Thread.new { explore(item) } #explore function in the last part ;)
end

threads.each { |t| t.join }

##Get Results \o/ !

Each thread will:

  • Get the page and parse it
  • Get the raw data of the paste
  • Match the data against the regex
  • Write the results to a file

def explore(link)
  reg = /.*@[a-z]*\.[a-z]{2,3}:\S*/ #email:pass regex
  vide = ""                         #buffer variable
  page = HTTParty.get(link)
  page_parsed = Nokogiri::HTML(page)
  page_parsed.css('#paste_code').map do |a|   #the paste's raw content
    a.text.scan(reg).each do |combo|
      vide += combo + "\n"          #every match goes into the buffer
    end
  end
  File.open('combo.txt', 'w') { |file| file.write(vide) }  #write buffer to a file
end
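To see what the regex actually catches, here is the tutorial's pattern run against a fabricated sample (these combos are made up for illustration):

```ruby
# The tutorial's email:pass regex: anything, then @domain.tld, then ":",
# then a run of non-whitespace (the password).
reg = /.*@[a-z]*\.[a-z]{2,3}:\S*/
sample = "john@example.com:hunter2\nnot a combo line\njane@test.org:pa55word"
matches = sample.scan(reg)
p matches  # only the two email:pass lines survive
```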


That’s it, you have just created your own Pastebin grabber! Here comes the full code:

require 'httparty'
require 'nokogiri'

def explore(link)
  reg = /.*@[a-z]*\.[a-z]{2,3}:\S*/
  vide = ""
  page = HTTParty.get(link)
  page_parsed = Nokogiri::HTML(page)
  page_parsed.css('#paste_code').map do |a|
    a.text.scan(reg).each do |combo|
      vide += combo + "\n"
    end
  end
  File.open('combo.txt', 'w') { |file| file.write(vide) }
end

page = HTTParty.get('https://www.google.com/search?q=site:pastebin.com+combo+email+pass')

page_parsed = Nokogiri::HTML(page)

link_array = []

page_parsed.css('.r a').map do |a|
  link_array.push(a["href"][7..-1])
end

threads = []
link_array.each do |item|
  threads << Thread.new { explore(item) }
end

threads.each { |t| t.join }

(This code actually got 14,000 combos, and it only explores the 1st page of Google results)

Thanks for reading! Please comment, and feel free to improve the code :smiley:

(oaktree) #2

So what exactly is the point of doing this? And wouldn’t writing to the same file from multiple Threads pose a data race issue?


The point is to get free combos for your stuff (it could be building a dictionary for brute force, testing those combos with Sentry MBA to get website accounts, etc.)
I just wanted to show how it’s possible to “crawl” a website and get specific content.

I’m not aware of the threading problems (I’m kind of a beginner at it), so maybe another member can check and answer?
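For reference, one standard way to address the race oaktree raised is to guard the shared write with a `Mutex`, so only one thread touches the shared state at a time. This is not the tutorial's code, just a minimal sketch of the pattern with stand-in data:

```ruby
# Sketch: serialize access to shared state with a Mutex.
results = []
lock = Mutex.new

threads = (1..4).map do |i|
  Thread.new do
    combo = "user#{i}@example.com:pass#{i}"  # stand-in for a scraped combo
    lock.synchronize { results << combo }    # one thread appends at a time
  end
end
threads.each(&:join)

puts results.length  # always 4: no lost updates
```

The same idea applies to the tutorial's `File.open` call: wrap it in `lock.synchronize` (and open the file in append mode) so threads don't clobber each other's output.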


Instead of navigating to pastebin.com/<paste_id>,
you can go to pastebin.com/raw/<paste_id>.
That way you don’t need to parse the HTML elements of the paste page.
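In code, that shortcut is just a different URL: fetch the raw endpoint and the response body is already plain text, so the `#paste_code` CSS step goes away. `raw_url` is a hypothetical helper name and `abc123` a made-up paste ID:

```ruby
# Build the raw-text URL for a paste; the body of a GET on this URL
# is the paste content itself, no HTML parsing needed.
def raw_url(paste_id)
  "https://pastebin.com/raw/#{paste_id}"
end

puts raw_url("abc123")  # https://pastebin.com/raw/abc123
```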


Indeed! I didn’t know that, thanks!

(system) #6

This topic was automatically closed after 30 days. New replies are no longer allowed.