{"id":654,"date":"2026-03-30T17:00:29","date_gmt":"2026-03-30T11:30:29","guid":{"rendered":"https:\/\/dealsflow.co\/blog\/?p=654"},"modified":"2026-03-30T17:00:29","modified_gmt":"2026-03-30T11:30:29","slug":"how-to-scrape-linkedin-data-legally","status":"publish","type":"post","link":"https:\/\/dealsflow.co\/blog\/how-to-scrape-linkedin-data-legally\/","title":{"rendered":"How to Scrape LinkedIn Data Legally in 2026 (Without Getting Blocked)"},"content":{"rendered":"<p>LinkedIn is the world&#8217;s largest professional network, holding detailed career data on more than 1 billion members across 200 countries. For sales teams building prospect lists, recruiters hunting for top candidates, and market researchers tracking industry trends, that data is enormously valuable. The problem is that most guides on LinkedIn scraping either make it sound completely risk-free or treat it as purely illegal. Neither is accurate.<\/p>\n<p>The truth sits in the middle. LinkedIn scraping occupies a legal grey zone shaped by court rulings, platform terms, and privacy regulations \u2014 all pulling in different directions. Done correctly, with the right methods and a clear understanding of the rules, it is possible to extract publicly available LinkedIn data for legitimate business use without triggering account bans or regulatory penalties. Done carelessly, it can cost you your account, your data, and in commercial-scale cases, serious legal exposure.<\/p>\n<p>This guide covers everything you need to know: the honest legal picture, how LinkedIn detects and stops scrapers, the five mistakes that get people banned, the three main methods for accessing data legally, the technical safeguards that keep you under the radar, and what to do with your data once you have it. The goal is not to help you exploit LinkedIn \u2014 it is to help you use its public data intelligently, compliantly, and without getting shut down.<\/p>\n<h2>What Is LinkedIn Scraping? 
(And Why People Get It Wrong)<\/h2>\n<p>LinkedIn scraping refers to the automated extraction of data from LinkedIn&#8217;s web pages \u2014 profile information, company pages, job listings, search results, and post engagement data \u2014 using software rather than manual browsing. Instead of a human clicking through profiles one by one, a scraping tool sends automated requests to LinkedIn&#8217;s servers, retrieves the HTML or API responses, and parses out the structured data.<\/p>\n<p>Two things are commonly misunderstood about this practice. First, scraping is not the same as hacking. LinkedIn itself has publicly clarified that unauthorized scraping is not a breach of its secure systems \u2014 it is the automated collection of data that is already publicly visible to anyone with a browser. Second, scraping is not inherently illegal. Whether it is legal or not depends on what data is being collected, from where, under what conditions, and what happens to it afterward.<\/p>\n<p>Professionals use LinkedIn scraping for a range of legitimate purposes. Sales and business development teams build targeted outreach lists of decision-makers by extracting names, job titles, and companies from public profiles. Recruiters source candidates at scale by filtering profiles against specific skills, seniority levels, and locations. Market researchers track hiring trends, company growth, and competitive intelligence by monitoring company pages and job postings. Outreach teams personalize messages by scraping recent post activity to understand a prospect&#8217;s current focus areas.<\/p>\n<p>Understanding what scraping actually is \u2014 and separating fact from misinformation \u2014 is the starting point for doing it responsibly.<\/p>\n<h2>Is LinkedIn Scraping Legal in 2026? (The Honest Answer)<\/h2>\n<p>Legality here is not a single question with a single answer. 
It operates across three distinct legal layers, each of which carries different risks and applies in different circumstances. Treating any one of them as the whole picture is where most people go wrong.<\/p>\n<h3>The hiQ Labs v. LinkedIn Ruling \u2014 What It Actually Means<\/h3>\n<p>The most important legal precedent for LinkedIn scraping came out of a long-running dispute between LinkedIn and a data analytics company called hiQ Labs. hiQ was scraping publicly visible LinkedIn profiles to power an employee retention product, and LinkedIn tried to stop them by sending a cease-and-desist letter and blocking their access.<\/p>\n<p>hiQ sued, and the case worked its way through the U.S. federal courts over several years. In April 2022, the U.S. Ninth Circuit Court of Appeals reaffirmed its earlier decision that scraping publicly available web data does not violate the Computer Fraud and Abuse Act (CFAA) \u2014 the primary federal anti-hacking statute. The court applied a &#8220;gates-up-or-down&#8221; framework from a Supreme Court case called Van Buren v. United States, reasoning that a publicly accessible website has no gates to raise or lower, and therefore accessing it cannot constitute unauthorized access under the CFAA.<\/p>\n<p>However, the story did not end there. In December 2022, after LinkedIn pursued a breach-of-contract claim \u2014 not a CFAA claim \u2014 the California district court ruled that the anti-scraping provisions in LinkedIn&#8217;s User Agreement are enforceable under contract law. The parties settled, with hiQ agreeing to a $500,000 judgment and a permanent injunction requiring it to stop scraping and destroy all data it had collected.<\/p>\n<p>The bottom line: the Ninth Circuit&#8217;s CFAA ruling remains intact and still represents the strongest U.S. legal precedent in favor of scraping public data. Courts have consistently declined to apply the CFAA to automated collection of publicly accessible information. 
But courts are far more comfortable enforcing platform terms through contract law claims, particularly against commercial-scale operators. A ToS violation is not a criminal offense \u2014 but it can still result in civil liability, especially when you have signed the agreement by creating an account.<\/p>\n<h3>Layer 1 \u2014 Criminal Law (CFAA)<\/h3>\n<p>The CFAA is an anti-hacking statute originally designed to prosecute people who break into computer systems without authorization. Its application to web scraping has been narrowed significantly by the hiQ ruling. Scraping data that is publicly accessible without a login \u2014 data with the gates already up \u2014 is unlikely to constitute a CFAA violation under current Ninth Circuit precedent.<\/p>\n<p>The line is crossed when scraping involves accessing data that requires authentication and has not been authorized. Using fake accounts to access login-gated data, bypassing technical access controls, or accessing private sections of the platform can move scraping into potential CFAA violation territory. The December 2022 hiQ settlement included a CFAA violation stipulation specifically tied to hiQ&#8217;s use of fake accounts to access login-protected pages \u2014 not to its scraping of public profiles.<\/p>\n<p>For most individual users scraping publicly visible data, criminal exposure under the CFAA is low. The pattern of enforcement has concentrated on commercial-scale operators with aggressive practices, not on businesses extracting a few thousand public profiles for internal use.<\/p>\n<h3>Layer 2 \u2014 Civil and Contract Risk<\/h3>\n<p>This is the layer most people underestimate. When you create a LinkedIn account, you agree to its User Agreement, which prohibits scraping, automated data collection, and the use of bots. 
By accepting that agreement, you enter a contract \u2014 and courts have confirmed that LinkedIn&#8217;s anti-scraping provisions are enforceable under breach-of-contract theory.<\/p>\n<p>The practical consequence is direct: scraping while logged into your own account ties any enforcement outcome to your profile. LinkedIn can terminate your account without notice, pursue civil claims against commercial-scale scrapers, and obtain injunctions requiring the destruction of scraped data. The hiQ settlement is the clearest illustration of what this looks like in practice.<\/p>\n<p>Real-world enforcement follows a pattern. Most individuals who violate the terms see account restrictions \u2014 temporary limits on their ability to view profiles, send messages, or use search. Civil claims and injunctions are reserved for organizations scraping at commercial scale, particularly those monetizing the data or competing directly with LinkedIn&#8217;s own products. That pattern of enforcement should not be mistaken for permissiveness \u2014 it reflects where LinkedIn&#8217;s legal team focuses its resources, not where the legal risk ends.<\/p>\n<h3>Layer 3 \u2014 Privacy Law (GDPR and CCPA)<\/h3>\n<p>Even when scraping is legally permissible under criminal and contract frameworks, the moment you collect personal data belonging to individuals, you are processing personal data \u2014 and that triggers privacy law obligations in multiple jurisdictions.<\/p>\n<p>Under the European Union&#8217;s General Data Protection Regulation (GDPR), any information relating to an identifiable natural person \u2014 names, job titles, email addresses, photos \u2014 constitutes personal data. Processing it requires a lawful basis under Article 6. 
In December 2024, France&#8217;s data protection authority (CNIL) fined a LinkedIn data company called KASPR \u20ac240,000 for scraping contact details from LinkedIn users who had chosen to limit the visibility of their information to their 1st and 2nd-degree connections. The CNIL found that collecting data beyond what users had made publicly visible exceeded any reasonable expectation \u2014 and that even for lawfully collected data, KASPR&#8217;s five-year retention periods were disproportionate.<\/p>\n<p>The GDPR&#8217;s legitimate interests basis \u2014 the most commonly cited basis for commercial scraping \u2014 requires passing a three-part test: the purpose must be legitimate, the processing must be necessary for that purpose, and the interests of the scraper must not be overridden by the rights and freedoms of the individuals whose data is being processed. The European Data Protection Board has made clear that public availability alone is not enough to establish this.<\/p>\n<p>The California Consumer Privacy Act (CCPA) imposes similar requirements for data belonging to California residents, including the right to know what data has been collected, the right to delete it, and restrictions on selling or sharing it without opt-out mechanisms.<\/p>\n<p>Practically, this means that if you store scraped LinkedIn data, you need to document your lawful basis, minimize what you collect to what you actually need, set retention limits, and have a way to honor deletion requests. The GDPR applies to any organization processing EU residents&#8217; data regardless of where the organization is based.<\/p>\n<h3>Public Data vs. 
Private Data \u2014 The Line You Cannot Cross<\/h3>\n<p>The clearest rule in LinkedIn scraping is also the most important one: public data is lower risk, private data is off-limits.<\/p>\n<div class=\"df-table-scroll\">\n<table>\n<thead>\n<tr>\n<th>Data Type<\/th>\n<th>Publicly Available<\/th>\n<th>Lower Legal Risk<\/th>\n<th>Notes<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Name, headline, location<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>On public profiles only<\/td>\n<\/tr>\n<tr>\n<td>Current job title and company<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>Public profiles<\/td>\n<\/tr>\n<tr>\n<td>Work history and education<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>If profile visibility is set to public<\/td>\n<\/tr>\n<tr>\n<td>Skills and endorsements<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>If profile visibility is set to public<\/td>\n<\/tr>\n<tr>\n<td>Profile picture URL<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>Public profiles<\/td>\n<\/tr>\n<tr>\n<td>Job postings<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>Fully public<\/td>\n<\/tr>\n<tr>\n<td>Company page data<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>Follower counts, descriptions, updates<\/td>\n<\/tr>\n<tr>\n<td>Post engagement (likes, comments)<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>On public profiles and pages<\/td>\n<\/tr>\n<tr>\n<td>Email addresses<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>Behind login; privacy law risk<\/td>\n<\/tr>\n<tr>\n<td>Phone numbers<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>Private data<\/td>\n<\/tr>\n<tr>\n<td>Private messages<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>Accessing these is illegal<\/td>\n<\/tr>\n<tr>\n<td>3rd-degree connection data<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>Requires login<\/td>\n<\/tr>\n<tr>\n<td>Data from users who restricted visibility<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>The KASPR fine confirms this is unlawful<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<p>If a piece of data requires a login to see, or if the user has set their profile or contact details 
to restricted visibility, it is not public data and should not be scraped.<\/p>\n<h2>Why LinkedIn Is So Hard to Scrape in 2026<\/h2>\n<p>Most guides treat LinkedIn&#8217;s anti-scraping measures as a brief side note. In practice, they are the central technical challenge \u2014 and understanding them is essential to not getting blocked. LinkedIn is not a passive website. It actively invests in detection systems that operate continuously, learn from new signals, and are updated multiple times per day.<\/p>\n<h3>LinkedIn&#8217;s Multi-Layer Anti-Scraping Defense System<\/h3>\n<p>LinkedIn&#8217;s defenses combine authentication walls, behavioral tracking, and request fingerprinting into a multi-layer system that assigns a fraud score to each visitor in real time. Any single layer alone would be insufficient; the system works because these layers reinforce one another.<\/p>\n<p>LinkedIn has publicly stated that its abuse detection models are retrained and automatically deployed several times per day to adapt to new scraping patterns. This means that a technique that works today may stop working within hours if LinkedIn&#8217;s models detect a new pattern and update accordingly.<\/p>\n<h3>1. The Authentication Wall<\/h3>\n<p>Most of LinkedIn&#8217;s valuable data \u2014 detailed profiles, full work histories, contact information, Sales Navigator results \u2014 sits behind a login requirement. The moment you authenticate with your own account to access this data, you are operating under a contract you accepted when you signed up. Any data collected while logged in is collected in breach of that contract if it involves automated extraction.<\/p>\n<p>Unauthenticated scraping of fully public data avoids the contract issue but is severely limited in scope. Without a session, LinkedIn typically restricts access to around 50 profile views per day per IP address before triggering friction or blocking access entirely.<\/p>\n<h3>2. 
Behavioral Tracking and Rate Limiting<\/h3>\n<p>LinkedIn monitors the number of requests coming from a single IP address or account, the speed at which pages are being accessed, and the patterns of navigation between them. Too many requests in a short period trigger flags and lead to temporary access restrictions or IP bans. LinkedIn uses deep learning to classify sequences of user behavior as automated, and outlier detection to identify activity that deviates from normal human patterns.<\/p>\n<p>A real human browsing LinkedIn does not access 40 profiles in 40 seconds. They scroll, pause, read, move their mouse in non-linear paths, and navigate between pages at irregular intervals. A bot almost always has a tell \u2014 consistent timing, linear navigation, uniform request patterns \u2014 and LinkedIn&#8217;s behavioral models are trained on millions of real user sessions to identify exactly those tells.<\/p>\n<h3>3. Request Fingerprinting and TLS Analysis<\/h3>\n<p>LinkedIn checks far more than IP addresses. When a request is sent to LinkedIn&#8217;s servers, a TLS handshake occurs, and the characteristics of that handshake \u2014 including a signature called a JA3 fingerprint \u2014 are evaluated. Headless browsers, even with stealth plugins, often produce TLS fingerprints that do not match the fingerprints of real browsers. LinkedIn cross-references this against other signals: browser-specific headers, cookies, device attributes, screen resolution, installed fonts, WebGL rendering capabilities, and more.<\/p>\n<p>LinkedIn analyzes over 50 browser characteristics to generate a unique fingerprint for each visitor. 
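<\/p>
<p>As a conceptual illustration only (this is not LinkedIn&#8217;s actual model), fingerprinting works by folding many weak signals into one composite identifier, so a client that spoofs most attributes but gets one wrong still produces an identity that matches no known-good browser. All signal names and values below are made up.<\/p>

```python
# Conceptual sketch, not LinkedIn's real system: hash many weak signals into
# one composite fingerprint. Every signal name and value here is illustrative.
import hashlib

def composite_fingerprint(signals: dict) -> str:
    # Canonicalize in a stable key order so the same client always hashes the same.
    canonical = "|".join(f"{key}={signals[key]}" for key in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

real_browser = {
    "user_agent": "Mozilla/5.0 ... Chrome/128",
    "screen": "1920x1080",
    "fonts_installed": 312,
    "webgl_renderer": "ANGLE (Apple, M1)",
    "tls_ja3": "771,4865-4866-4867,...",   # TLS handshake signature
}

# A headless browser can spoof most attributes, but if even one signal
# (here, the TLS fingerprint) differs from a real browser's, the composite
# identity no longer matches any known-good profile.
headless = dict(real_browser, tls_ja3="771,4865,...")

print(composite_fingerprint(real_browser) == composite_fingerprint(headless))  # False
```

<p>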
Datacenter IP addresses \u2014 the type used by cloud providers like AWS, Google Cloud, and DigitalOcean \u2014 are flagged almost immediately, because LinkedIn maintains comprehensive blacklists of known datacenter IP ranges and knows that real users do not browse LinkedIn from AWS data centers.<\/p>\n<p>The result is that a scraper can pass 95% of fingerprint checks and still be identified and blocked because one signal fails.<\/p>\n<h3>4. How LinkedIn&#8217;s Frontend Loads Data (And Why It Matters)<\/h3>\n<p>LinkedIn uses multiple methods to load data onto its pages, and each one has different implications for scraping:<\/p>\n<p><strong>HTML template rendering<\/strong>\u00a0is the simplest method \u2014 the page structure arrives as HTML that a parser can read directly. This is the most scrape-friendly format, but it is used for only a limited subset of LinkedIn&#8217;s pages.<\/p>\n<p><strong>JavaScript hydration via script tags<\/strong>\u00a0means that the HTML skeleton arrives first, and the actual data is injected into the page by JavaScript running in the browser. A scraper that only downloads HTML without executing JavaScript will receive an empty shell instead of usable data, which means a full JavaScript-rendering environment \u2014 such as a headless browser \u2014 is required.<\/p>\n<p><strong>XHR calls to LinkedIn&#8217;s internal Voyager API<\/strong>\u00a0are used for dynamic data loading as the user scrolls or interacts with the page. The Voyager API is LinkedIn&#8217;s internal API, not a public one, and it is not documented or officially available. Its endpoints change frequently, and frontend element structures change on a similarly regular cadence. 
This makes scrapers that rely on specific Voyager endpoints highly brittle \u2014 they require constant maintenance as LinkedIn updates its platform.<\/p>\n<h2>The 5 Mistakes That Get You Blocked (Or Banned)<\/h2>\n<p>Understanding how LinkedIn&#8217;s defenses work makes the common mistakes obvious. These are the five behaviors that account for the vast majority of bans and blocks.<\/p>\n<p><strong>Scraping at bot speed.<\/strong>\u00a0Visiting dozens of profiles per minute is the single clearest signal that an account is automated. LinkedIn&#8217;s rate limiting kicks in fast. Without authentication, access is limited to roughly 50 profile views per day per IP before friction begins. With a logged-in session, the threshold is higher but still well within the range of what a diligent human researcher could produce in a day. Exceeding it is immediately suspicious.<\/p>\n<p><strong>Using datacenter IPs or known VPNs.<\/strong>\u00a0LinkedIn maintains extensive blacklists of IP ranges associated with cloud hosting providers, commercial VPN services, and known proxy providers. Traffic arriving from these IP ranges is flagged before a single page is loaded. A datacenter proxy that successfully routes a request to LinkedIn&#8217;s servers does not mean that request will receive real data \u2014 LinkedIn can serve empty pages, trigger CAPTCHAs, or silently log the IP for future blocking.<\/p>\n<p><strong>Targeting data behind the login wall.<\/strong>\u00a0Scraping emails, phone numbers, private connection data, or any information that requires authentication to access combines the highest legal risk with the highest detection risk. 
This data is not public, accessing it requires a live session cookie, and using that session for automated extraction is both a contract violation and potentially a CFAA violation if fake accounts are involved.<\/p>\n<p><strong>Using browser extensions for scale.<\/strong>\u00a0Browser extensions that automate LinkedIn actions \u2014 visiting profiles, sending connection requests, extracting data \u2014 operate inside the user&#8217;s real LinkedIn session, which means they interact with LinkedIn through the same browser that the user uses for legitimate activity. LinkedIn can detect extension-driven behavior through the same behavioral analysis it applies to external scrapers, and it has become increasingly effective at doing so. Using an extension does not make scraping invisible \u2014 it just changes the vector.<\/p>\n<p><strong>Ignoring early warning signals.<\/strong>\u00a0CAPTCHAs, repeated re-authentication requests, and sudden drops in the volume of profile data returned are LinkedIn telling you that it has noticed something unusual. Most people push through these signals and continue scraping, which accelerates the timeline to a permanent ban. Treating these signals as stop signs \u2014 pausing activity, rotating sessions, and reducing volume \u2014 gives an account a chance to recover before a full ban is applied.<\/p>\n<h2>How to Scrape LinkedIn Legally Without Getting Blocked \u2014 The Right Approach<\/h2>\n<p>With the legal framework and detection landscape understood, the question becomes which method is appropriate for which situation. There are three main approaches, each suited to a different type of user and use case.<\/p>\n<h3>Method 1 \u2014 No-Code LinkedIn Scraping Tools (Recommended for Most Users)<\/h3>\n<p>No-code scraping tools are purpose-built cloud platforms that handle anti-detection infrastructure, rate limiting, and session management on behalf of the user. 
They are the right choice for sales teams, recruiters, and marketers who need LinkedIn data regularly but do not have dedicated engineering resources to build and maintain scrapers.<\/p>\n<p>These tools work by running automated actions through cloud infrastructure that manages proxy rotation, human-behavior emulation, and request pacing. The best ones are designed to operate within ranges that minimize detection risk, and they update their internal logic as LinkedIn changes its defenses.<\/p>\n<p>The leading options in 2026, with their key characteristics, are as follows:<\/p>\n<p><strong>PhantomBuster<\/strong>\u00a0is one of the most established platforms in the space, offering over 100 pre-built automation sequences called Phantoms that cover LinkedIn profiles, Sales Navigator, job listings, event attendees, and post engagement. It is cloud-based, supports action chaining (for example, scraping profiles and then sending connection requests in sequence), and integrates natively with HubSpot, Salesforce, and Pipedrive. Its pricing starts at around $56 per month on annual billing. It is best suited for users who need to combine data extraction with outreach automation across multiple LinkedIn surfaces.<\/p>\n<p><strong>Evaboot<\/strong>\u00a0is a Chrome extension that works exclusively with LinkedIn Sales Navigator. It is designed to export search results directly to a clean CSV file with one click, and it automatically removes duplicate entries, fixes formatting errors, and filters out leads that do not actually match the search filters \u2014 a real problem given that roughly 30% of Sales Navigator search results include leads that fall outside the applied filters. It also finds and verifies professional email addresses. Pricing starts at $9 per month, but a LinkedIn Sales Navigator subscription (which starts at around $99 per month) is required separately. 
Evaboot is best for sales professionals who live in Sales Navigator and want clean, structured export data without additional complexity.<\/p>\n<p><strong>Captain Data<\/strong>\u00a0is a no-code data automation platform designed for operations teams that need to synchronize LinkedIn data across multiple business tools. It handles complex workflows involving extraction, enrichment, and CRM integration, and it supports account rotation to distribute scraping activity across multiple LinkedIn accounts. Its pricing starts at around $164 per month, reflecting its enterprise positioning. It is best suited for teams that need LinkedIn scraping as one input into a broader automated data pipeline.<\/p>\n<p><strong>Vayne.io<\/strong>\u00a0is a developer-friendly API that uses residential proxy infrastructure to access LinkedIn data programmatically. It provides structured JSON output for profiles, companies, and job listings, and it is designed for teams that want API access without building their own scraping infrastructure. It is best for technical users who need to integrate LinkedIn data into internal systems or data warehouses.<\/p>\n<p><strong>PhantomBuster, Evaboot, and Captain Data<\/strong>\u00a0are all specialized tools, each strongest in a different part of the workflow. 
When selecting any no-code tool, four features should be non-negotiable: residential proxy infrastructure rather than datacenter IPs, built-in rate limiting and randomized delays, operation through cloud execution rather than browser extensions for scale, and explicit documentation of what data the tool accesses and under what conditions.<\/p>\n<h3>Method 2 \u2014 API-First Scraping (Recommended for Developers)<\/h3>\n<p>For developers and technical teams, the API route offers more flexibility and scalability than no-code tools \u2014 but comes with its own complications depending on whether the official LinkedIn API or a third-party alternative is used.<\/p>\n<p><strong>LinkedIn&#8217;s Official API<\/strong>\u00a0is a set of RESTful APIs that provides programmatic access to LinkedIn data and functionality. Since 2015, access to any meaningful data through the official API has required joining LinkedIn&#8217;s Partner Program \u2014 a formal approval process that LinkedIn runs at its discretion. Partner status is generally reserved for established companies with proven track records, existing user bases, clear compliance with data protection laws, and business models that align with LinkedIn&#8217;s own commercial interests. The approval process can take six to twelve months, and LinkedIn declines the majority of applications.<\/p>\n<p>Even for approved partners, the official API is severely limited for data-collection purposes. The Marketing API restricts the use of member data to managing LinkedIn pages and ad campaigns \u2014 it explicitly prohibits using member data for sales prospecting, recruiting, lead generation, CRM enrichment, or audience building. The Sales API and Talent API provide access to Sales Navigator and recruiting data respectively, but only at enterprise pricing levels (estimated at $7,200 to $50,000+ per year) and with strict compliance requirements. 
Partners are capped at 500 API calls per user per day on basic access tiers.<\/p>\n<p>For most businesses that need LinkedIn data, the official API is not a practical option. It is appropriate for large enterprises that already have established LinkedIn partner relationships, companies that only need user authentication (the &#8220;Sign in with LinkedIn&#8221; button), and organizations spending $100,000 or more per month on LinkedIn advertising.<\/p>\n<p><strong>Third-party LinkedIn data providers<\/strong>\u00a0\u2014 including platforms like Bright Data, ScrapFly, and similar services \u2014 offer API access to LinkedIn data by managing the scraping infrastructure themselves. They handle proxy rotation, browser fingerprinting, session management, and CAPTCHA solving, and expose the results through a clean API interface. Pricing is typically per profile or per company lookup, or through subscription tiers with request caps, rather than the partner-program commitment required for the official API.<\/p>\n<p>These providers sit in a legally ambiguous position \u2014 they are building on top of LinkedIn&#8217;s platform without official authorization \u2014 but they represent the practical choice for the vast majority of developers who need LinkedIn data at scale. The key question when evaluating a provider is whether they have transparent pricing, well-documented endpoints, and clear statements about what data they access and how they handle it.<\/p>\n<p>When choosing between the official LinkedIn API and a third-party provider, the decision generally comes down to scale and risk tolerance. The official API is the compliance gold standard but is inaccessible to most businesses. 
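<\/p>
<p>In practice, a third-party lookup is usually one authenticated HTTP request billed per profile. The sketch below is hypothetical throughout: the provider domain, endpoint path, query parameter, and bearer-token auth scheme are placeholders, and a real provider&#8217;s documentation takes precedence.<\/p>

```python
# Hypothetical third-party lookup sketch: the provider domain, endpoint path,
# query parameter, and bearer-token auth are all assumptions for illustration.
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"
BASE = "https://api.example-provider.com/v1/linkedin/profile"  # placeholder

def build_lookup(profile_url: str) -> urllib.request.Request:
    """Prepare one per-profile lookup request (providers typically bill per lookup)."""
    query = urllib.parse.urlencode({"url": profile_url})
    return urllib.request.Request(
        f"{BASE}?{query}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )

req = build_lookup("https://www.linkedin.com/in/some-public-profile/")
# Sending it would be: json.load(urllib.request.urlopen(req))
print(req.full_url)
```

<p>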
Third-party APIs provide the data access most businesses actually need, at predictable pricing and without a year-long approval process, but they operate without official authorization.<\/p>\n<h3>Method 3 \u2014 Building Your Own Scraper (For Developers Who Accept the Risk)<\/h3>\n<p>Building a custom LinkedIn scraper is technically feasible but operationally demanding. The maintenance burden alone makes it impractical for most teams \u2014 LinkedIn changes its DOM structure regularly, CSS selectors that work today break without warning, and each change requires debugging, testing, and redeployment.<\/p>\n<p>The core technical approach involves Python with either Playwright or Selenium for JavaScript rendering, combined with BeautifulSoup for HTML parsing. For authenticated scraping, the two session tokens required are\u00a0<code>li_at<\/code>\u00a0(LinkedIn&#8217;s authentication cookie) and\u00a0<code>JSESSIONID<\/code>, both extractable from browser DevTools while logged into LinkedIn. These cookies expire every few days and must be refreshed regularly.<\/p>\n<p>The exposure created by authenticated scraping is significant: the session is tied to a real LinkedIn account, and any detection of automated behavior leads directly back to that account. Multiple developers sharing access to the same session token create cross-device fingerprinting signals that LinkedIn&#8217;s behavior models are specifically trained to detect.<\/p>\n<p>For public data only, a Playwright-based scraper with human-like behavior emulation can function without session authentication, but is limited to the roughly 50 profile views per IP per day that LinkedIn permits before triggering friction on unauthenticated sessions.<\/p>\n<p>Any team that builds a custom scraper should be prepared to invest ongoing engineering time in maintenance, manage a residential proxy infrastructure, implement session refresh logic, handle CAPTCHAs, and rebuild selectors each time LinkedIn updates its frontend. 
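<\/p>
<p>For concreteness, the authenticated approach described above can be sketched in a few dozen lines. This is a minimal sketch under stated assumptions: you supply valid <code>li_at<\/code> and <code>JSESSIONID<\/code> values from your own DevTools, and the cookie domains and quoting shown here should be verified against what your own browser actually stores, since LinkedIn changes these details.<\/p>

```python
# Minimal authenticated-fetch sketch using Playwright. Assumption: the cookie
# domains and quoting below match what your browser's DevTools shows.
import random
import time

def session_cookies(li_at: str, jsessionid: str) -> list[dict]:
    """Build LinkedIn's two session cookies in Playwright's cookie format."""
    return [
        {"name": "li_at", "value": li_at,
         "domain": ".linkedin.com", "path": "/"},
        # JSESSIONID is typically stored wrapped in double quotes.
        {"name": "JSESSIONID", "value": f'"{jsessionid}"',
         "domain": ".www.linkedin.com", "path": "/"},
    ]

def human_pause(low: float = 2.0, high: float = 8.0) -> float:
    """Randomized delay between actions; fixed intervals are a bot tell."""
    delay = random.uniform(low, high)
    time.sleep(delay)
    return delay

def fetch_profile_html(url: str, li_at: str, jsessionid: str) -> str:
    # Imported lazily so the helpers above work without a browser installed.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context()
        context.add_cookies(session_cookies(li_at, jsessionid))
        page = context.new_page()
        page.goto(url, wait_until="networkidle")
        page.mouse.wheel(0, 600)   # scroll a little, as a reader would
        human_pause()
        html = page.content()
        browser.close()
        return html
```

<p>Everything downstream of the fetch (selectors and parsing) is the part that breaks most often and carries the ongoing maintenance cost.<\/p>
<p>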
This is not a build-once solution \u2014 it is an ongoing system that requires active maintenance and will occasionally break without warning.<\/p>\n<h2>Technical Safeguards to Stay Undetected<\/h2>\n<p>Regardless of which method is used, a consistent set of technical safeguards is required to minimize detection risk. These are not optional optimizations \u2014 they are the baseline requirements for operating at any meaningful scale.<\/p>\n<h3>Proxy Strategy<\/h3>\n<p>The single most important technical safeguard is proxy quality. Residential proxies route traffic through IP addresses assigned by consumer internet service providers to real home users. Because these addresses belong to genuine ISP customers, LinkedIn cannot block them without also blocking legitimate users on the same ISP network. Datacenter proxies \u2014 the cheap, widely available alternative \u2014 are blocklisted by LinkedIn almost immediately and should not be used.<\/p>\n<p>The leading residential proxy providers for LinkedIn use in 2026 include Bright Data, IPRoyal, Oxylabs, and Smartproxy. Bright Data offers a pool of over 150 million residential IPs across 195 countries, with sticky session support of up to 30 minutes \u2014 sufficient for most LinkedIn scraping sessions. IPRoyal provides up to 7-day sticky sessions, which is particularly useful for authenticated scraping where consistent session persistence matters. Smartproxy offers a 65-million-IP pool at around $2.20 per gigabyte, making it significantly cheaper than the enterprise providers for mid-scale operations. 
Oxylabs sits between Bright Data and Smartproxy in both price and features, with a 30-minute sticky session cap.<\/p>\n<p>Key practices for proxy management: limit requests to 10\u201315 per IP per hour on profile pages, rotate IP addresses to prevent per-IP rate limit accumulation, use sticky sessions (10\u201330 minutes) so that authenticated sessions remain consistent across a scraping run, and assign one proxy exclusively to one account \u2014 sharing proxies across multiple accounts creates cross-traffic that LinkedIn flags as bot-like behavior.<\/p>\n<h3>Mimicking Human Behavior<\/h3>\n<p>Technical proxy management keeps IP addresses from being blocked. Behavioral emulation keeps the actions taken from being flagged as automated.<\/p>\n<p>Introduce random delays of 2 to 8 seconds between page loads and actions, rather than fixed intervals. Fixed timing \u2014 every request arriving exactly 3 seconds apart \u2014 is statistically anomalous and detectable. Randomization within a realistic range mirrors the natural variability of human browsing. Trigger JavaScript events like scrolling and mouse movement to simulate the way a real user would read a page. Navigate via the homepage or feed before going directly to target profiles, rather than jumping straight to profile URLs \u2014 direct navigation to specific URLs is a common tell for scrapers.<\/p>\n<p>For volume, stay under 100 profiles per session and no more than 2,000 public profiles per account per day, distributed across multiple IP addresses; below that range, automated detection is significantly less likely to trigger. These are not hard limits \u2014 LinkedIn&#8217;s detection is probabilistic, not binary \u2014 but they represent the operating range that keeps activity within normal human variation.<\/p>\n<p>Before running any automation, build trust on the account gradually. 
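<\/p>\n<p>The timing and volume rules above can be sketched as a small pacer. The constants are drawn from the ranges in this section; the proxy pool and request logic themselves are left out:<\/p>

```python
# Sketch of the pacing rules: randomized 2-8 s delays, a per-IP budget within
# the 10-15 requests/hour range, and a per-session cap of 100 profiles.
import random
import time
from collections import defaultdict, deque

MAX_PER_IP_PER_HOUR = 12   # inside the 10-15 range recommended above
MAX_PER_SESSION = 100      # profiles per scraping session

class Pacer:
    def __init__(self):
        self._hits = defaultdict(deque)  # proxy IP -> request timestamps
        self._session_count = 0

    def can_use(self, ip, now=None):
        now = time.monotonic() if now is None else now
        hits = self._hits[ip]
        while hits and now - hits[0] > 3600:  # drop entries older than 1 hour
            hits.popleft()
        return (len(hits) < MAX_PER_IP_PER_HOUR
                and self._session_count < MAX_PER_SESSION)

    def record(self, ip, now=None):
        self._hits[ip].append(time.monotonic() if now is None else now)
        self._session_count += 1

def human_delay():
    """Randomized, not fixed: fixed intervals are statistically detectable."""
    return random.uniform(2.0, 8.0)
```

<p>A scraping loop would call <code>can_use<\/code> before each request, sleep for <code>human_delay()<\/code> seconds between page loads, and stop the session once the budget is exhausted.<\/p>\n<p>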
A new LinkedIn account that immediately begins high-volume profile viewing will be flagged faster than an account with an established history of normal activity. Warming up an account over days or weeks before deploying automation reduces the initial risk.<\/p>\n<h3>Session and Infrastructure Management<\/h3>\n<p>Run scraping through cloud execution on a consistent schedule, rather than manually triggering sessions from a personal device. Behavior spikes from inconsistent scheduling \u2014 running a scraper for 6 hours, going quiet for a week, then running it again for 12 hours \u2014 create patterns that LinkedIn&#8217;s outlier detection models are designed to catch.<\/p>\n<p>Monitor for early friction signals as a continuous practice. CAPTCHAs, re-authentication prompts, and unusual latency are LinkedIn&#8217;s escalating warnings before a full ban. Treat each of these as a signal to stop and reduce activity, not as an obstacle to route around.<\/p>\n<p>Rotate and revoke session access at the device level. Session tokens that are shared across multiple devices or environments create fingerprinting inconsistencies. If a session has been used on one machine, it should not be reused from a different machine or cloud environment without refreshing the authentication first.<\/p>\n<h3>Respecting robots.txt and Rate Limits<\/h3>\n<p>LinkedIn&#8217;s\u00a0<code>robots.txt<\/code>\u00a0file specifies which pages and paths automated agents are permitted to access. Before scraping any new endpoint or section of the platform, checking\u00a0<code>robots.txt<\/code>\u00a0to confirm whether it is designated as accessible for automated crawlers is basic compliance practice. Ignoring\u00a0<code>robots.txt<\/code>\u00a0does not change the legal risk materially, but it removes any argument that the scraping was conducted in good faith.<\/p>\n<p>Never send volume that could meaningfully impact LinkedIn&#8217;s server performance. 
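<\/p>\n<p>The <code>robots.txt<\/code> check described above can be scripted with nothing but the standard library. The rules below are an illustrative stand-in, not LinkedIn&#8217;s actual file; in practice you would fetch the live file before parsing it:<\/p>

```python
# Sketch: check whether a path is designated as crawlable before scraping it.
# EXAMPLE_RULES is a made-up robots.txt for illustration only.
from urllib.robotparser import RobotFileParser

EXAMPLE_RULES = """\
User-agent: *
Disallow: /search
Allow: /jobs
"""

def is_allowed(path, rules=EXAMPLE_RULES, agent="*"):
    parser = RobotFileParser()
    parser.parse(rules.splitlines())
    return parser.can_fetch(agent, path)
```

<p>With these stand-in rules, <code>is_allowed("\/jobs")<\/code> returns true and <code>is_allowed("\/search")<\/code> returns false; a scraper would run this check before touching any new section of the site.<\/p>\n<p>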
Aggressive request rates are not just a detection risk \u2014 they are also a legal and ethical issue, and they are one of the behaviors that courts have used to distinguish acceptable scraping from trespass to chattels claims.<\/p>\n<h2>What Data Can You Legally Scrape From LinkedIn?<\/h2>\n<p>The clearest practical guide to what is safe to collect is a combination of what is publicly visible without authentication and what GDPR and CCPA compliance requires you to limit your collection to.<\/p>\n<div class=\"df-table-scroll\">\n<table>\n<thead>\n<tr>\n<th>Data Type<\/th>\n<th>Publicly Available<\/th>\n<th>Lower Legal Risk<\/th>\n<th>Notes<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Name, headline, location<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>On public profiles<\/td>\n<\/tr>\n<tr>\n<td>Current job title and company<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>Public profiles<\/td>\n<\/tr>\n<tr>\n<td>Work history and education<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>If profile is publicly visible<\/td>\n<\/tr>\n<tr>\n<td>Skills and endorsements<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>If profile is publicly visible<\/td>\n<\/tr>\n<tr>\n<td>Profile picture URL<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>Public profiles<\/td>\n<\/tr>\n<tr>\n<td>Job postings<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>Fully public<\/td>\n<\/tr>\n<tr>\n<td>Company page data<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>Descriptions, follower counts, updates<\/td>\n<\/tr>\n<tr>\n<td>Post engagement data<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<td>Likes and comments on public posts<\/td>\n<\/tr>\n<tr>\n<td>Email addresses<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>Behind login; GDPR\/CCPA risk<\/td>\n<\/tr>\n<tr>\n<td>Phone numbers<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>Private data<\/td>\n<\/tr>\n<tr>\n<td>Private messages<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>Accessing these is unlawful<\/td>\n<\/tr>\n<tr>\n<td>3rd-degree connection data<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>Requires 
login<\/td>\n<\/tr>\n<tr>\n<td>Data from restricted-visibility profiles<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>KASPR fine confirms this is unlawful<\/td>\n<\/tr>\n<tr>\n<td>Contact info set to limited visibility<\/td>\n<td>No<\/td>\n<td>No<\/td>\n<td>Collecting this is a GDPR violation<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<p>A useful guiding principle: if a non-logged-in visitor browsing LinkedIn in a private browser window cannot see it, you should not be collecting it.<\/p>\n<h2>Use Cases \u2014 Why People Scrape LinkedIn (And Which Methods Fit Each)<\/h2>\n<p><strong>Sales and lead generation.<\/strong>\u00a0Sales teams use LinkedIn scraping to build targeted prospect lists of decision-makers at companies that match their ideal customer profile. The data extracted \u2014 names, job titles, company names, and locations from public profiles \u2014 feeds into outreach sequences. The best method for this use case is a no-code tool like PhantomBuster or Evaboot, which export data directly to CSV or integrate with CRM platforms. Enrichment with verified email addresses should be handled through a separate tool to avoid collecting contact data that LinkedIn users have not made public.<\/p>\n<p><strong>Recruiting and talent sourcing.<\/strong>\u00a0Recruiters scrape LinkedIn to source candidates at scale, filtering by skills, job titles, seniority levels, and locations. The data extracted feeds into applicant tracking systems (ATS) for further evaluation. No-code tools with ATS integrations \u2014 PhantomBuster for engagement automation, Evaboot for Sales Navigator exports, Captain Data for complex team-based workflows \u2014 are the most appropriate methods. 
This use case is also well served by third-party APIs that provide structured profile data.<\/p>\n<p><strong>Market research and competitive intelligence.<\/strong>\u00a0Researchers track company page growth, hiring velocity (as a signal of company expansion or contraction), post engagement patterns, and the seniority distribution of a competitor&#8217;s employee base. This use case is well-suited to either a custom scraper or a third-party API, depending on the technical resources available. The data involved is generally public company-level information rather than personal profile data, which reduces privacy law risk.<\/p>\n<p><strong>Outreach personalization.<\/strong>\u00a0Rather than bulk-scraping profiles, some teams scrape engagement data from specific posts \u2014 the people who liked or commented on a particular piece of content \u2014 to identify prospects who have demonstrated active interest in a topic. This intent-based scraping produces higher-quality lead lists than bulk profile extraction and typically operates at lower volume, reducing detection risk. PhantomBuster and similar tools can extract event attendees and post engagers as well as profile data.<\/p>\n<p><strong>Investment research.<\/strong>\u00a0Investors track hiring trends, leadership changes, and company-page signals to build a picture of a company&#8217;s trajectory. The data involved is almost entirely public company-level information, which represents the lowest legal risk category.<\/p>\n<h2>What to Do With Your Scraped LinkedIn Data<\/h2>\n<p>Extracting data is only the beginning. What happens to it afterward determines your compliance exposure as much as how you collected it.<\/p>\n<p><strong>Cleaning and deduplicating.<\/strong>\u00a0Raw scraped data contains formatting errors, duplicate entries, incorrectly matched fields, and outdated information. 
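<\/p>\n<p>A first pass at this cleanup is straightforward to script. The field names here (<code>profile_url<\/code>, <code>full_name<\/code>, <code>company<\/code>) are hypothetical; adapt them to whatever your export actually contains:<\/p>

```python
# Sketch: normalize whitespace and casing, and drop duplicate profiles keyed
# on a normalized profile URL. The naive .title() casing is good enough for a
# first pass but will mangle names like "McDonald"; a real pipeline needs more.

def clean_records(records):
    seen = set()
    cleaned = []
    for rec in records:
        key = rec.get("profile_url", "").strip().lower().rstrip("/")
        if not key or key in seen:
            continue  # drop duplicates and rows with no URL
        seen.add(key)
        cleaned.append({
            "profile_url": key,
            "full_name": " ".join(rec.get("full_name", "").split()).title(),
            "company": rec.get("company", "").strip(),
        })
    return cleaned
```

<p>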
Tools like Clay allow for automated data cleaning, deduplication, and enrichment before the data is used for outreach or imported into a CRM. Evaboot handles some of this automatically during the export process, removing emojis, fixing capitalization errors, and filtering leads that do not match the applied search filters.<\/p>\n<p><strong>Email enrichment.<\/strong>\u00a0Verified email addresses are not available from LinkedIn scraping of public profiles without a login. Enrichment tools \u2014 which cross-reference scraped profile data against their own databases of professional emails \u2014 can append email addresses to a list of names and companies. This process should be limited to professional email addresses, should include a verification step to reduce bounce rates, and should be conducted with awareness of the GDPR and CCPA obligations that apply to holding contact data.<\/p>\n<p><strong>CRM import.<\/strong>\u00a0Most scraping tools export to CSV, which can be imported directly into HubSpot, Salesforce, Pipedrive, and similar platforms. PhantomBuster provides native two-way HubSpot integration and one-way connections to Salesforce and Pipedrive, which removes the need for middleware and reduces the risk of broken integrations.<\/p>\n<p><strong>GDPR and CCPA compliance after collection.<\/strong>\u00a0Once you hold scraped data, you have ongoing obligations. Document your lawful basis for processing \u2014 most commercial LinkedIn scraping will rely on legitimate interests under GDPR Article 6(1)(f), which requires you to demonstrate that your interests are proportionate and do not override the individuals&#8217; rights. Set retention limits: the CNIL found KASPR&#8217;s five-year retention period to be disproportionate. Enable deletion workflows so that requests from individuals to remove their data from your systems can be handled promptly. 
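<\/p>\n<p>As an illustration, a retention limit and deletion workflow can be as simple as the sketch below. The 12-month window is an assumption made for the example, not legal guidance; the KASPR ruling only tells us that five years was too long:<\/p>

```python
# Sketch: enforce a retention limit and honor deletion requests on scraped
# records. Each record is assumed to carry a timezone-aware `collected_at`
# timestamp and a `profile_url` key (hypothetical field names).
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # assumed 12-month window for this sketch

def apply_retention(records, now=None):
    """Keep only records collected within the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]

def handle_deletion_request(records, profile_url):
    """Remove every record for an individual who asked to be deleted."""
    return [r for r in records if r["profile_url"] != profile_url]
```

<p>Running the retention pass on a schedule, and the deletion handler on every data subject request, keeps the stored dataset aligned with the obligations above.<\/p>\n<p>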
Inform individuals that their data has been collected, either proactively or when they make contact.<\/p>\n<p><strong>Hard prohibitions on data use.<\/strong>\u00a0Scraped LinkedIn data should never be used for decisions related to credit, insurance, or employment screening. These are explicit regulatory red lines in multiple jurisdictions, and using professional profile data for these purposes without proper legal authorization creates significant liability. Never resell scraped data without clear contractual rights to do so. Never combine scraped data with other datasets to create consumer profiles that go beyond what LinkedIn users would reasonably expect their public information to be used for.<\/p>\n<h2>LinkedIn Scraping in 2026 \u2014 What&#8217;s Changed and What&#8217;s Coming<\/h2>\n<p>The landscape for LinkedIn scraping has tightened materially in the past two years. Detection has become more sophisticated, enforcement more consistent, and the window for simple technical workarounds has narrowed.<\/p>\n<p>LinkedIn&#8217;s anti-bot systems now incorporate AI-driven behavioral analysis that can detect micro-patterns in browsing behavior that are essentially impossible for automation tools to replicate perfectly. CAPTCHA frequency has increased, messaging caps for individual accounts have tightened, and search filter behavior has been made more restrictive to slow down bulk data collection through Sales Navigator. Browser fingerprinting has become more comprehensive, with over 50 characteristics now being evaluated per visitor.<\/p>\n<p>The techniques that many guides still describe as viable \u2014 running headless crawlers with rotating proxies on a VPS \u2014 break regularly in practice, produce account bans when they fail, and require ongoing maintenance that consumes more time than they save. 
One developer who documented three consecutive LinkedIn scraper bans before finding a working architecture reported spending 15+ hours per month maintaining scraping infrastructure before switching to a managed service. That maintenance cost is real and often invisible in guides that focus only on the technical setup.<\/p>\n<p>The winning approach in 2026 is smaller in volume, slower in cadence, and more targeted in scope. Intent-based scraping \u2014 collecting post engagement data, event attendees, and job change signals rather than bulk profile exports \u2014 produces higher-quality leads with lower detection risk, because it operates at lower volumes and targets signals that are genuinely meaningful for sales and recruiting.<\/p>\n<p>On the horizon, LinkedIn has begun experimenting with watermarking AI-generated &#8220;About&#8221; sections, which will alter the scraping fingerprint for profiles where the content has been artificially generated. Real-time job change alert tools \u2014 which push notifications when a target prospect changes roles \u2014 are emerging as an alternative to periodic bulk scraping for monitoring lead list changes. First-party data cooperatives, where organizations barter opt-in career data to avoid the scraping risk entirely, represent a longer-term structural shift in how professional data is accessed and shared.<\/p>\n<h2>Conclusion<\/h2>\n<p>LinkedIn scraping in 2026 is not a binary choice between &#8220;do it and risk everything&#8221; and &#8220;don&#8217;t do it at all.&#8221; It is a risk management exercise with three distinct legal dimensions, multiple technical methods with different risk profiles, and a set of safeguards that, applied consistently, can bring that risk down to a manageable level.<\/p>\n<p>The legal picture is clear in its structure if not always in its outcomes. Under U.S. federal criminal law, scraping public data carries low criminal exposure. 
Under contract law, scraping while logged in violates terms you accepted. Under GDPR and CCPA, collecting personal data creates processing obligations that apply regardless of where you are or how you collected the data. All three layers are real. Ignoring any one of them creates exposure.<\/p>\n<p>The detection picture is equally clear. LinkedIn invests heavily in multi-layer anti-bot systems that analyze behavior, fingerprint browsers, classify IP addresses, and retrain their models multiple times per day. Datacenter proxies, headless browsers without stealth plugins, aggressive request rates, and browser extensions running at scale all fail against these systems. Residential proxies, human-behavior emulation, conservative volume limits, and cloud-based execution are not optional enhancements \u2014 they are the baseline requirements for operating without getting banned.<\/p>\n<p>The method choice follows from your resources and scale. No-code tools like PhantomBuster and Evaboot are the right starting point for most sales teams, recruiters, and marketers. Developer teams with integration requirements should evaluate third-party LinkedIn data APIs like Bright Data or ScrapFly, which handle anti-detection infrastructure and expose structured data through clean API endpoints. Custom scrapers are viable for teams with dedicated engineering resources who are prepared for ongoing maintenance and accept the account risk.<\/p>\n<p>What makes scraping sustainable in 2026 is not technical sophistication alone. It is the combination of public-data focus, human-behavior emulation, GDPR-compliant data handling, and the discipline to treat early warning signals as stop signs rather than obstacles. 
The organizations that will continue to access LinkedIn data effectively are those that treat it as a responsibility, not just an opportunity.<\/p>\n<h2>Frequently Asked Questions<\/h2>\n<h3><strong>Is LinkedIn scraping illegal?<\/strong><\/h3>\n<p>It is not simply legal or illegal \u2014 it depends on three factors: what law you are asking about, what data is being collected, and how it is being used. Under U.S. federal criminal law (CFAA), scraping publicly accessible data is unlikely to violate the statute under Ninth Circuit precedent. Under civil and contract law, scraping while logged into a LinkedIn account violates the User Agreement and can result in account termination and civil claims. Under GDPR and CCPA, collecting personal data from LinkedIn \u2014 even publicly visible data \u2014 constitutes data processing that requires a lawful basis, retention limits, and deletion workflows.<\/p>\n<h3><strong>Can I scrape LinkedIn without an account?<\/strong><\/h3>\n<p>Yes, for fully public data \u2014 but with significant limitations. Without a session, access is typically capped at around 50 profile views per day per IP before LinkedIn restricts further access. The richest and most useful data requires authentication, which moves scraping into contract-violation territory.<\/p>\n<h3><strong>Will I get banned for using a scraping tool?<\/strong><\/h3>\n<p>The risk of a ban depends on volume, method, and the safety design of the tool being used. Cloud-based tools with built-in rate limiting and residential proxy infrastructure carry significantly lower ban risk than browser extensions or custom scrapers running on datacenter IPs. 
No scraping method is entirely risk-free \u2014 LinkedIn&#8217;s ToS prohibits all automated data collection \u2014 but the practical risk varies dramatically based on how the scraping is conducted.<\/p>\n<h3><strong>Can I scrape Sales Navigator data?<\/strong><\/h3>\n<p>Sales Navigator is a paid product that requires a logged-in account to access. Scraping it is a contract violation, and LinkedIn actively monitors Sales Navigator for signs of automated access. The data available through Sales Navigator is richer than what is available through standard public profile scraping, which is precisely why LinkedIn monitors it more aggressively. Tools like Evaboot are designed to export Sales Navigator data through the Chrome extension model rather than headless scraping, which reduces the risk level \u2014 but does not eliminate it.<\/p>\n<h3><strong>What is the difference between LinkedIn&#8217;s official API and third-party scrapers?<\/strong><\/h3>\n<p>LinkedIn&#8217;s official API is the only fully authorized way to access LinkedIn data programmatically, but it requires Partner Program approval, takes months to obtain, is priced for enterprise budgets, and is severely restricted in what it permits for data collection purposes. Third-party scrapers and data APIs access LinkedIn data without official authorization, using technical methods to extract public information at scale. 
They provide the data access that most businesses actually need, but they operate without LinkedIn&#8217;s blessing and carry the ToS risk that comes with that.<\/p>\n<h3><strong>How do I stay GDPR-compliant after scraping?<\/strong><\/h3>\n<p>Document your lawful basis for processing (most commonly legitimate interests under Article 6(1)(f)), minimize collection to only the data fields you actually need, set a proportionate retention limit (the CNIL flagged five years as disproportionate in the KASPR case), implement a deletion workflow for data subject requests, and inform individuals that their data has been collected \u2014 either through proactive notification or through your privacy policy if they make contact with you.<\/p>\n<h3><strong>Can I scrape LinkedIn for free?<\/strong><\/h3>\n<p>Open-source tools and free tiers of commercial tools allow some scraping at zero cost. PhantomBuster&#8217;s free tier allows 30 minutes of execution time per month. Evaboot&#8217;s entry plan starts at $9 per month. Custom scrapers built with Python and Playwright are free to build but require residential proxy infrastructure, which costs money. Attempting to scrape at any meaningful volume for free typically means using free proxies \u2014 which are blacklisted by LinkedIn \u2014 and results in rapid bans.<\/p>\n<h3><strong>What is the safest LinkedIn scraping tool in 2026?<\/strong><\/h3>\n<p>Safety is relative to volume and use case. For most users, a cloud-based no-code tool with built-in residential proxy infrastructure, conservative rate limiting, and no browser extension dependency represents the lowest risk approach. PhantomBuster and Captain Data are consistently rated highly for account safety among the commercial tools. 
The safest approach of all is to use LinkedIn&#8217;s official API \u2014 but for most organizations, partner program access is not a realistic option.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>LinkedIn is the world&#8217;s largest professional network, holding detailed career data on more than 1 billion members across 200 countries. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":661,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"default","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[54],"tags":[],"class_list":["post-654","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-linkedin-hacks"],"acf":[],"_links":{"self":[{"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/posts\/654","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/comments?post=654"}],"version-history":[{"count":1,"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/posts\/654\/revisions"}],"predecessor-version":[{"id":660,"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/posts\/654\/revisions\/660"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/media\/661"}],"wp:attachment":[{"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/media?parent=654"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/categories?post=654"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dealsflow.co\/blog\/wp-json\/wp\/v2\/tags?post=654"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}