🚨 Don’t Wait Until It’s Too Late: Why Upgrading from Windows 10 to 11 Should Be a Business Priority

In-Telecom promotional graphic warning businesses about the end of Windows 10 support, featuring a large exclamation mark, a message urging upgrade to Windows 11, and a system notification stating Microsoft will no longer provide support for Windows 10 after October 2025.

In technology, procrastination is a risk multiplier. And one of the biggest risks facing businesses today is running out the clock on Windows 10.

Microsoft ends support for Windows 10 on October 14, 2025. That may feel like a distant date—but in IT years, it’s right around the corner.

Here’s what’s at stake if you wait too long to upgrade to Windows 11—and why forward-thinking organizations are making the move now.


❌ The Hidden Risks of Staying on Windows 10

  1. Security Vulnerabilities Will Multiply
  2. Cyber Insurance May Not Cover* (any incident triggered due to Win10 past EOL date)
  3. Compliance Violations Could Be Triggered* (any incident triggered due to Win10 past EOL date)
  4. Hardware Compatibility Bottlenecks
  5. Software Compatibility Issues Arise as Vendors Increasingly Focus on Windows 11
  6. Unexpected system slowdowns as the industry increasingly pivots to the new Platform ahead of the cutoff date
  7. Operational impacts as your systems and employees become less productive

🕐 So Why Are Businesses Waiting?

The most common reasons we hear:

  • “We don’t know what we can upgrade from a hardware and software dependency standpoint.”
  • “We’re still depreciating older devices.”
  • “We don’t want to disrupt end users.”

Sound familiar? These are solvable problems—but they get harder, you have fewer options, less time to plan, and increasingly more costly, …the longer you wait.


✅ What Smart Organizations Are Doing Now

  1. Inventory and Assessment
  2. Develop a Replacement Plan
  3. Communicate the Change
  4. Leverage Automation

💡 Final Thought: Don’t Let 2025 Sneak Up On You

The most dangerous phrase in IT?

“We’ll deal with it later.”

At In-Telecom, we’re helping organizations across the South plan and execute smooth, secure transitions to Windows 11—with minimal business disruption and maximum ROI.

If you haven’t started your migration plan yet, now is the time. Let’s talk strategy before it becomes an emergency.


🔗 Need help assessing your current Windows 10 environment or building a phased upgrade plan? Call 833-482-8647 or give us your information at https://www.in-telecom.com/get-a-quote/

In-Telecom Named to 2025 CRN® Fast Growth 150 List

Slidell, LA – August 5, 2025 — In-Telecom, a leading provider of managed IT services, cybersecurity, cloud communications, and access control solutions, is proud to announce its inclusion on the 2025 CRN® Fast Growth 150 list. Published by CRN®, a brand of The Channel Company, this prestigious list recognizes the top-performing and fastest-growing technology solution providers across North America.

The Fast Growth 150 list acknowledges companies that have achieved exceptional growth over the past two years through innovation, adaptability, and strategic focus in the ever-evolving IT landscape. Honorees include technology integrators, managed service providers (MSPs), value-added resellers (VARs), and IT consultants that are transforming how businesses operate through emerging technologies like AI, cybersecurity, and cloud computing.

“We are incredibly honored to be recognized on the CRN Fast Growth 150 list,” said Shawn Torres, CEO of In-Telecom. “This achievement is a testament to the dedication of our team and the trust our clients place in us to deliver tailored, future-ready technology solutions. As we continue to expand our offerings and enter new markets, our mission remains clear — to help businesses stay secure, connected, and competitive.”

“Each company on the Fast Growth 150 list is harnessing its extensive technology acumen and forward-looking business strategy to accelerate growth and evolve to stay ahead in the fast-moving IT arena,” said Jennifer Follett, VP, U.S. Content, and Executive Editor, CRN, The Channel Company. “These notable companies show passion and commitment to finding success, supporting agility, and delivering enduring outcomes for customers. We congratulate each of them and look forward to their continued growth and evolution.”

A portion of the 2025 Fast Growth 150 list will be featured in the August issue of CRN Magazine. The complete list is available online at www.crn.com/fastgrowth150 starting August 4, 2025.

About In-Telecom In-Telecom is a full-service technology partner specializing in Managed IT Services, Cybersecurity, VoIP and Cloud Communications, and Access Control Solutions. With a

client-first approach, In-Telecom helps businesses of all sizes across the U.S. improve operational efficiency, secure their environments, and scale with confidence.

Media Contact:

Will Monson, Director of Marketing

In-Telecom

marketing@in-telecom.com

(985) 326-7001

www.in-telecom.com

AI – The New Security Frontier: What Happens to the Information You Share with AI?

In today’s digital age, many of us interact with AI—especially large‑language‑model powered chatbots—for everything from drafting emails to seeking personal advice. As people increasingly “confide” personal details—feelings, health issues, financial troubles, even proprietary business information—the question becomes ever more urgent: what actually happens to that information once entered into an AI system?


1. The Nature of User Input and Its Lifecycle in AI Systems

When you type in a prompt, that data is:

  • Captured by the service provider: Most AI platforms log your prompt and the model’s response, both for quality‑control, analytics, and model‑improvement purposes.
  • Potentially retained and used for training: Unless explicitly opt‑out, your inputs may be incorporated into future model updates or fine‑tuning datasets.
  • Stored in logs: These logs — which may persist indefinitely — can be subpoenaed, breached, or reviewed internally.

A 2021 study highlighted that once sensitive personal information (e.g. healthcare or financial details) enters a conversational AI, it often persists in backend datasets unless actively expunged (Kiplinger).

Stanford’s March 2024 analysis of GenAI and privacy raised similar concerns: even if your input is not used directly to train a model, your entire conversation may reside in logs that could later be accessed or exposed (Stanford HAI).


2. Why People Share Personal Data with AI (and the Risks)

There’s a growing trend—especially among younger users—to treat chatbots like confidants: revealing emotional struggles, relationships, credit card mishaps, or proprietary business issues.

Recent research analyzing over 2.5 million Reddit posts (r/ChatGPT) found users frequently expressed concern about privacy, data persistence, and losing control of shared input over time (The Wall Street Journal, arXiv).

An earlier British survey of 491 respondents confirmed that users worry about data deletion and misuse, and feel powerless once personal info has been shared with AI (arXiv).

Some users believe their conversations are ephemeral—but in many systems they’re not. Even with anonymization, de‑identification can fail: re‑identification techniques can link anonymized data with real identities (e.g., via auxiliary public datasets) (Wikipedia).


3. Data Retention, Logging, and Re‑identification Risks

Logging and retention

AI providers often keep detailed logs to facilitate model improvement, content moderation, and debugging. Unless a firm offers privacy modes or auto-deletion, data may persist indefinitely.

These logs may include not just text prompts, but metadata: timestamps, user identifiers, IP addresses, geographic data, etc. That metadata often dramatically increases re‑identification risk.

De‑identification and its failure modes

De‑identification often involves stripping obvious identifiers, but researchers have repeatedly demonstrated the ease of re‑identification:

  • In health data, Latanya Sweeney famously re‑identified Massachusetts governor’s hospital visit records using zip code, DOB, and gender—despite anonymization (Wikipedia).
  • Anime vs Netflix ratings: researchers matched anonymized Netflix data with IMDb reviews and reached ~68% identity matches from just two ratings and dates (Wikipedia).
  • MRI scans stripped from identifying labels have nonetheless been reconstructed into recognizable faces via AI algorithms (Axios).

Thus, even if conversational AI promises anonymization, combining it with other data sources or model outputs can undo that anonymity.


4. Prompt Injection and Leakage of Other Users’ Input

Security vulnerabilities such as prompt injection can exacerbate risk. A malicious actor could craft inputs that make a model inadvertently reveal private data from other users—a form of data leakage (ScienceDirect, Wikipedia).

OWASP’s Top 10 for LLM applications (2025) ranks prompt injection as the top security risk. This includes both direct injection (user manipulating the behavior of the system) and indirect injection (hidden in documents or websites the AI ingests) (Wikipedia).

If data from previous conversations persists in a model or retrieval system, a prompt injection vulnerability could potentially expose it to another user.


5. Legal, Governance and e‑Discovery Dimensions

Regulatory compliance and data governance

AI systems create complex data governance challenges:

  • In legal and corporate settings, prompts and outputs may become discoverable in litigation. Courts are already grappling with whether AI “conversations” and model‑generated text constitute official documentation (Reuters).
  • For enterprise AI deployments, organizations must revise records retention policies, legal hold procedures, and train users accordingly (Reuters).

Global legal regimes

Privacy laws predate GenAI. The EU’s GDPR or U.S. data protection laws cannot directly address how personal data is used in AI training pipelines—but regulators are catching up.

For instance, Italy temporarily banned ChatGPT over concerns that it violated GDPR by using personal data without appropriate consent (NYU JIPEL). Lawsuits in the U.S. argue that indiscriminate scraping of copyrighted or personal data to train models may violate privacy or IP rights (NYU JIPEL).


6. High‑Profile Analogous Cases: Lessons from Cambridge Analytica

While not directly about AI, the Facebook–Cambridge Analytica scandal is instructive:

  • Data initially collected under one consent (a personality quiz) ended up used without consent for large‑scale profiling of tens of millions of users—some sources estimate up to 87 million Facebook accounts were impacted (Wikipedia).
  • Psychographic targeting and misuse of data illustrate how information given in one context can be re‑purposed in harmful ways (Wikipedia).

Similarly, when people divulge personal or corporate info in AI prompts, it may enter into datasets used for purposes unknown to or unintended by the user.


7. Emerging Research & Defensive Techniques

Privacy‑preserving NLP methods

A 2022 systematic review cataloged over 60 methods for privacy‑preserving NLP (e.g. differential privacy, homomorphic encryption, federated learning) (pmc.ncbi.nlm.nih.gov, arXiv).

In healthcare and high‑sensitivity domains, studies emphasize the need for robust techniques to prevent leakage through model training and inference pipelines (ScienceDirect, pmc.ncbi.nlm.nih.gov).

Adversarial “noise” defenses

Interestingly, machine learning vulnerabilities known as adversarial examples may be used defensively: injecting small noise into data can prevent models from correctly re‑identifying users based on behavior patterns, reducing inference risk (WIRED).

However, as the technique matures, attackers may train models to resist adversarial defenses as well.


8. A Skeptical Lens: What Assumptions Are We Making?

Let’s challenge some common assumptions users tend to have:

Assumption: “AI doesn’t remember me, so it’s safe.”

Challenge: Except in clearly documented privacy modes, providers often store everything. Even ephemeral‑looking UIs may save prompts to server logs unless you’re in anonymous or incognito mode.

Assumption: “My data is de‑identified; it’s anonymous.”

Challenge: De‑identification can be reversed or cross‑referenced. Metadata and auxiliary datasets can re‑identify with disturbing ease (e.g. Latanya Sweeney’s work, Netflix/AOL re‑identification) (Wikipedia, Axios).

Assumption: “What I share can’t harm others.”

Challenge: Prompt injection or data pooling means your data might be exposed to other users—not just your own. Think of it as indirect leakage.

Assumption: “AI outputs are ephemeral, not legal records.”

Challenge: Recent court cases (e.g. Tremblay v. OpenAI, 2024) show that prompts and outputs may be subject to e‑discovery and must be accounted for in corporate legal strategy (Reuters).


9. How to Approach Sharing Information with AI: A Practical Guide

Be selective: Never share Social Security numbers, financial credentials, medical records, corporate secrets, or personal identifiable details—even if anonymized—unless using a vetted, privacy‑guaranteed environment (The Wall Street Journal).

Understand provider practices: Review privacy policies. Use services that allow prompt deletion, data opt‑out, or data anonymization. Some providers offer temporary sessions that don’t record history.

Use privacy‑focused alternatives or privacy modes: Certain tools (e.g. Duck.ai, incognito chat) aim to minimize retention. Consider where and how your input is routed and stored (The Wall Street Journal).

Advocate for strong governance: For corporate or enterprise usage, insist on Privacy Impact Assessments (PIAs) and strong data governance—per guidance from organizations like Osano and regulators pushing for structured AI privacy frameworks (osano.com, NYU JIPEL).

Train users and enforce legal compliance: If your organization deploys AI tools, train staff to avoid disclosing secrets or PII. Update records retention, implement legal‑hold policy extensions, and coordinate with legal/compliance stakeholders (Reuters).


10. Looking Ahead: Policy, Technology, and Trust

We are at an inflection point: current laws were written before GenAI existed. Without updated frameworks, users’ rights and protections are ambiguous.

Julia’s Wired essay on generative AI’s slide into a “data slurry” argues that individual responsibility is insufficient—privacy loss is collective, systemic, and inevitable if not addressed at regulatory level (NYU JIPEL, WIRED).

Regulators in the EU and U.S. are moving—Italy’s temporary ChatGPT ban, EU data protection suits, and U.S. class‑action claims highlight that oversight is catching up (NYU JIPEL).

Technologically, research into federated learning, synthetic data, differential privacy, and adversarial privacy techniques aim to reclaim control over user data—but adoption remains limited and uneven (arXiv, ScienceDirect).


📌 Summary: Key Takeaways

Question Reality / Risk Does AI remember what I share? Often yes—unless you use a privacy‑guaranteed mode. Is my data truly anonymous? Not necessarily—de‑identified data can often be re‑identified. Can others retrieve my data? Potentially, via prompt injection or shared retrieval systems. Can shared data become legal evidence? Yes—models’ logs, prompts, and outputs may be discoverable under legal frameworks. Can AI providers resell, reuse data? Some do use inputs to improve models or analytics; terms vary by provider.


Final Perspectives & Recommendations

  • Question your assumptions. Don’t assume data is auto‑deleted or that “anonymized” means safe.
  • Take a skeptic’s viewpoint. Challenge providers: how long do you retain data, who can access it, and how will it be used?
  • Check your reasoning. If you assume your input is private, but logs exist, that assumption is flawed.
  • Explore alternate angles. Sometimes a locally hosted model, or privacy‑preserving AI solution, may offer safer choices.
  • Demand clarity and governance. Privacy Impact Assessments, user consent, data opt‑outs, and transparency should be non‑negotiable.

Conclusion

As your CIO and CISO, I urge you: yes, AI offers incredible value—but never treat it like another person to whom you can reveal secrets without consequence.

When you “confide” personal or sensitive information—including emotional or proprietary content—you are often entering a system that may log, retain, reuse, or even leak that data—to other users, to legal authorities, or through breach or adversarial exploit.

The frontier of AI security isn’t just adversarial hacking—it’s information governance, user trust, and transparency. Understanding what happens to your data when using AI is no longer optional—it’s essential.

AI – The New Security Frontier: Why Today’s Emails Are Far More Dangerous

AI-driven email threats are redefining cybersecurity – Delano Collins, CIO, explores how advanced phishing attacks and AI-generated scams make today's emails more dangerous than ever.

1. Introduction

Phishing remains a top cyber threat—then came AI. Generative models like GPT-4, Claude, and Gemini have ushered in an alarming evolution in the quality, scale, and effectiveness of phishing campaigns. These attacks are no longer crude scams, with broken English, filled with misspelled words and often lacking context; today, they’re carefully crafted, context-aware social engineering assaults that rival—or even surpass—human-crafted spear phishing in sophistication.

In this post, we specifically:

  1. Break down how generative AI empowers attackers, from ideation to delivery.
  2. Review academic and industry evidence showing AI-enhanced phishing outperforms traditional methods.
  3. Explore novel attack vectors like prompt‑injection targeting AI summarizers.
  4. Evaluate defensive strategies, acknowledging their limitations.
  5. Offer actionable takeaways for defenders in enterprise and MSP contexts.

We aim to question assumptions, scrutinize reasoning, and provide rigor. Let’s dive in.


2. Generative AI as a Phishing Game-Changer

2.1 Flawless, Context‑Aware Messaging

Modern language models produce near-human prose—flawless grammar, tone, and structure. According to Axios, thanks to AI chatbots like GPT, “scam emails are harder to spot and the tells… clunky grammar… utterly useless.” (TechRadar, Wikipedia, Axios)

Critically, attackers may train models on real marketing emails, creating highly credible mimicry:

“They even sound like they are in the voice of who you’re used to working with.” (Axios)

This precision extends to non-English languages, expanding target pools globally. Expect target-rich languages like Icelandic to now be viable due to AI’s linguistic fluency.

2.2 Personalization at Enterprise Scale

AI scrapes profiles from LinkedIn, company pages, public forums, and more—then tailors emails with relevant personal or organizational context. An article in CACM reveals:

“Machine learning algorithms now scour social media… to craft messages that speak directly to the individual, mimicking the style, tone…and context of communications one might expect from trusted contacts.” (Communications of the ACM)

Even unsophisticated attackers can now deploy spear-phishing en masse, rendering traditional volume‑over‑personalization security assumptions obsolete.

2.3 Automated Spear‑Phishing Pipelines

Academic evidence underscores this shift. A November 2024 study (Heiding et al.) compared phishing click-through rates:

  • Human‑crafted spear‑phish: 54%
  • Fully AI‑automated spear‑phish: 54%
  • AI‑generated with human review: 56%
  • Control (generic): 12% (arXiv, Wikipedia, arXiv)

The takeaway: AI alone can replicate human-level effectiveness—even without oversight.

Meanwhile, Hoxhunt research shows AI agents now outperform elite red teams:

  • In 2023, AI was 31% less effective
  • By Nov 2024, only 10% less effective
  • By March 2025, AI surpassed humans by 24% (Hoxhunt)

This suggests AI phishing tools are maturing rapidly, with continuous iterative learning closing the gap—and overtaking—human adversaries.


3. Resilience Against Detection Technologies

3.1 Bypassing Language‑Based Filters

With flawless language, AI content evades many detection systems trained on typos or stylistic anomalies. Cobalt reports:

“60% of recipients fall victim to AI-generated phishing emails, equivalent to rates for non-AI generated emails.” (cobalt.io)

Moreover:

  • 40% of corporate attacks are now initiated via AI.
  • Spammers reduce campaign costs by 95%. (cobalt.io)

These metrics signal alarming efficiency: fewer resources, same (or greater) impact.

3.2 Polymorphic and Dynamic Content

Generative models enable polymorphism: subtly varied versions of the same email, evading signature-based filters. TechRadar corroborates:

“These emails often impersonate executives, integrate into existing threads, and use lookalike domains… bypass traditional security tools… polymorphic tactics.” (TechRadar)

Technically, this means defenses must shift from static signatures to behavior and intent analysis.

3.3 Exploiting AI Summarizers via Prompt-Injection

A new frontier: using hidden HTML/CSS to alter AI-generated summaries in-mail. Multiple security outlets document vulnerabilities in Google Gemini summarization:

  • Attackers embed hidden prompts (white text, zero font size) that manipulate AI summarizers to produce fake alerts—e.g., “Your password was compromised, call support.” (Tom’s Hardware)

This “prompt-injection” exploit targets the AI’s inability to differentiate instruction layers. It is now recognized as a top LLM security risk by OWASP. (Wikipedia)

These emails don’t rely on links or attachments, often evading traditional detectors and exploiting the user’s trust in AI summaries.


4. Quantifying the Risk

4.1 Success Rates & Performance Metrics

  • 42% higher success for AI‑enabled multi‑channel phishing vs. email-only campaigns. (TechMagic)
  • AI‑driven spear phishing emails have a 92% higher success rate than legacy versions. (ZeroThreat)

These combined with the cost savings (95% cheaper campaigns) show AI dramatically amplifies phishing ROI.

4.2 Economic & Operational Implications

Data from IBM states phishing breach costs average $4.88M. (cobalt.io) With AI lowering effort and financial risk for attackers, we should expect both the volume and impact of phishing cyber-attacks to grow.

Microsoft’s research at BlackHat (e.g., LOLCopilot tool) shows corporate AI systems can be twisted to send internal phishing in the victim’s own voice—creating a new layer of insider attack. (The Guardian, WIRED)

4.3 Emerging Attack Mediums: Vishing & Deepfakes

Today’s threats aren’t just emails. Deepfake voice and video clones are becoming viable attack vectors, especially for high-value targets:

“Deepfake audio/video to impersonate real individuals.” (Gallagher)

As costs fall and effectiveness rises, expect multimedia spear-phishing to evolve from novelty to threat.


5. Defense Strategies: What Works—and What Doesn’t

5.1 Beyond Signatures: Semantic & Behavioral AI Defenses

Traditional SAT (Security Awareness Training) is inadequate alone. Hoxhunt research shows behavior-based training reduces susceptibility, even to AI‑driven campaigns. (Hoxhunt)

Detection solutions need to:

  • Analyze semantic intent, sentiment, and unusual context switching.
  • Detect polymorphic delivery and thread injection.

5.2 Addressing Prompt‑Injection Flaws

For AI summarizers:

The responsibility lies with integrators (Google, Microsoft, others) to harden summarization pipelines.

5.3 Organizational Security Hygiene

Defensive playbook must include:

  • Verification culture (“Polite paranoia”)—e.g., confirming requests via alternate channel (phone, chat) (Axios)
  • Enforced MFA / WebAuthn
  • Strong DMARC/DKIM/SPF to prevent spoofing
  • Domain similarity detection for homoglyphs (Wikipedia, Wikipedia)
  • Simulated phishing to continuously test workforce readiness (Wikipedia)

5.4 Investing in AI-Powered Defense

Ironically, AI enables both attacks and defense:

  • LLMs can power phishing intent detectors
  • Transformer-based models with explainability (e.g., LIME) are showing promise (arXiv, cobalt.io, arXiv)
  • Multi-layered machine learning (e.g., RAIDER) reduces feature space while retaining accuracy (arXiv)

Defenders should adopt these intelligently—understanding they are not a magic bullet.


6. Rethinking Cybersecurity Strategies

6.1 Assume AI is Weaponized Adversarially

If your security plans don’t consider AI-enabled threats—multi-modal, faster, cheaper—they’re already outdated.

6.2 Defense-in-Depth, Rigorously Applied

Security needs richer contextual awareness: monitoring unusual thread insertions, semantic anomalies, hidden HTML cues, and behavioral inconsistencies.

6.3 Human-Machine Teaming for Security

Human judgment must remain central. LLMs can detect nuanced threats but need oversight and explainability; humans check edge cases.

6.4 Continuous Vigilance & Training

Dynamic attacks demand an adaptive posture:

  • Regular simulations and training
  • Fresh detection models trained on AI-generated phishing
  • Info-sharing via communities like OWASP, 0din, UK NCSC, etc.

7. Challenging Our Assumptions

  • How current is the data? Much comes from 2024–2025. But AI’s capabilities evolve monthly—our defenses must update continuously.
  • Are click‑through rates enough? They are telling but don’t capture credential theft, lateral movement, or ransomware initiation.
  • Can detection truly scale? Polymorphic phishing still challenges ML models—semantic context detection is hard at scale.

8. Conclusions & Call to Action

  • AI is here—and hackers are using it. It’s not a distant threat—it’s now parallel to elite human adversaries.
  • Volume, quality, and ROI of phishing have skyrocketed.
  • Traditional defense mechanisms are insufficient. We need AI-powered detection combined with human oversight.
  • Prompt injection exposes trust-based vulnerabilities in AI systems.
  • Action steps are urgent: remove blind spots, tighten hygiene practices, fortify human-AI defenses, and maintain vigilance.
  • Defense in Depth: assume email filters, security awareness training, and humans will fail. Implement detective measures to identify WHEN, not IF, phishing emails are successful.

🔧 Quick Takeaways for Technical Teams & CISOs

Area Action Email pipeline Sanitize hidden HTML/CSS; monitor domain similarity Security training Phishing simulations with AI-crafted emails Detection systems Deploy behavioral and semantic models AI systems Use sandboxing/disambiguation to avoid prompt injection Tech culture Encourage verification culture; safe “stop and check” settings


9. Final Thoughts

Phishing has always been a human problem—exploiting trust. With AI, attackers now craft authenticity at scale, blending linguistic sophistication with deep contextualization. Understanding the offensive strategies and success of their tactics helps inform how we must respond. A well-designed defense posture—grounded in technical safeguards, organizational culture, and adaptive learning—can blunt these threats.

In-Telecom Named a 2025 Top Workplace in the Greater New Orleans Area Based on Employee Feedback

Slidell, LA, June 30, 2025 – In-Telecom has been awarded a Top Workplaces 2025 honor by New Orleans Top Workplaces. This recognition is based solely on employee feedback gathered through a third-party survey administered by employee engagement technology partner Energage LLC. The confidential survey uniquely measures the employee experience and its key themes such as feeling Respected & Supported, Enabled to Grow, and Empowered to Execute.

“Earning a Top Workplaces award is a badge of honor for companies, especially because it comes authentically from their employees,” said Eric Rubino, CEO of Energage. “That’s something to be proud of. In today’s market, leaders must ensure they’re allowing employees to have a voice and be heard. That’s paramount. Top Workplaces do this, and it pays dividends.”

Leadership at In-Telecom shared their excitement about the award
“This recognition means the world to us because it comes directly from the people who make In-Telecom successful,” said Shawn and Jimmy, co-owners of In-Telecom. “We’ve always believed that taking care of our employees is just as important as taking care of our clients. We’re proud to foster a workplace where people feel supported, valued, and part of something meaningful.”

Savanna Heller, a current employee, reflected on her experience
“In just six months at In-Telecom, I’ve been trusted with real autonomy to do what I do best. That level of trust and respect isn’t something you find everywhere,” said Heller. “And as someone with severe asthma, the fully paid health insurance has been a game changer. They truly do what is best for the employees.

About In-Telecom
Founded and headquartered in Slidell, Louisiana, In-Telecom is a trusted IT and communications partner providing proactive IT support, cybersecurity, physical security, cloud services, and VoIP solutions to businesses across the nation. With a client-focused approach and deep industry expertise, In-Telecom empowers organizations to streamline operations, enhance security, and maintain business efficiency and continuity. Learn more at www.in-telecom.com.

In-Telecom, Top 500 Managed Service Provider in the Nation, Set to Open an Office in Arlington, TX!

New Arlington Address

Arlington, TX- June 25, 2025 – In-Telecom Texas is excited to announce the expansion of its Texas operations to Arlington, TX, with the new facility scheduled to open in early 2026. This move reflects In-Telecom’s long-term commitment to the Dallas-Fort Worth region and its growing base of business clients.

Following the 2024 acquisition of Lantana Communications—a trusted name in DFW business communications for over three decades—In-Telecom has continued to expand its presence and service offerings in Texas, including VoIP, managed IT, cybersecurity, and physical security solutions.

To commemorate the new location, In-Telecom is partnering with the Arlington Chamber of Commerce to host an official ribbon-cutting ceremony in early 2026. More details will be announced as the grand opening approaches.

“Our office in Arlington represents a powerful combination of growth, service, and community,” said Shawn Torres, CEO of In-Telecom. “We’re excited to strengthen our roots here and continue building relationships that benefit both businesses and the community.”

As a community-focused company, In-Telecom believes in showing up and giving back. This year, the company became both a member and sponsor of the Arlington Chamber of Commerce, actively participating in local events and initiatives. Additionally, In-Telecom is proud to be a full-year sponsor of the Levitt Pavilion Arlington, helping support free live music and cultural events that bring people together throughout the city.

Earlier this year, In-Telecom was recognized by CRN® as one of the 2025 Solution Provider 500, a prestigious honor awarded to the top technology solution providers in North America. This distinction highlights In-Telecom’s commitment to excellence, innovation, and proactive client support.

The new Arlington office will serve as a central hub for:

  • VoIP & Cloud Communications
  • Managed & Co-Managed IT Services
  • Cybersecurity & Compliance Solutions
  • Video Surveillance & Access Control

“We’ve always believed that business success goes hand-in-hand with community involvement,” said Jimmy Burns, COO of In-Telecom. “Arlington has welcomed us, and we’re proud to invest back in the people and organizations that make this city great.”

About In-Telecom

Founded and headquartered in Slidell, Louisiana, In-Telecom is a trusted IT and communications partner providing proactive IT support, cybersecurity, physical security cloud services, and VoIP solutions to businesses across the nation. Following the acquisition of Fort Worth-based Lantana Communications, In-Telecom has expanded its footprint in the Dallas–Fort Worth region—growing its team and enhancing its suite of business technology solutions leading to the new office in Arlington. With a client-focused approach and deep industry expertise, In-Telecom empowers organizations to streamline operations, strengthen security, and ensure business continuity. In 2025, In-Telecom was recognized by CRN® as one of North America’s Top 500 Solution Providers. Backed by decades of experience and a growing Texas team, In-Telecom is committed to keeping businesses secure, connected, and competitive. Learn more at https://www.in-telecom.com/in-telecom-dfw/

Media Contact:
Will Monson
Marketing Director
In-Telecom
wmonson@in-telecom.com

In-Telecom Named to CRN Solution Provider 500 List for 2025  

In-Telecom, a leading provider of business technology solutions, is pleased to announce that CRN®, a brand of The Channel Company, has recognized In-Telecom on the 2025 CRN Solution Provider 500 list.  This annual list recognizes the top solution providers in North America based on revenue, innovation, and commitment to helping customers navigate an evolving IT landscape. 

CRN’s annual Solution Provider 500 list recognizes North America’s largest solution providers by revenue and serves as a prominent benchmark of leading IT services companies. The companies on the list are key influencers propelling growth in the IT industry and the global technology channel. 

“This recognition is a testament to the hard work of our entire team. They should get all the credit for this,” said Jimmy Burns, Co-Owner of In-Telecom. “We’re proud for our team to get national recognition and look forward to continuing to raise the bar for service and innovation in the IT industry. Our team truly is the best, and we are so thankful to have them,” said Shawn Torres, CEO and Co-Owner 

“The Solution Provider 500 list spotlights the technology integrators, managed service providers, value-added resellers and IT consulting firms who bring in the most revenue by leading the way in business and service innovation,” said Jennifer Follett, VP, U.S. Content, and Executive Editor, CRN, The Channel Company. “Recognition is reserved for companies demonstrating an unwavering commitment to business agility and sustained growth through rapidly changing industry needs and technology advancements. Congratulations to each company for earning a well-deserved spot in the Solution Provider 500.” 

 The full Solution Provider 500 list will be available online at www.CRN.com/SP500, and a sampling of the list will be featured in the June issue of CRN Magazine. 

CDK Global Cybersecurity Breach Puts 15,000 Dealerships at Risk

CDK Global, a major provider of dealership management software, suffered a cyberattack that disrupted thousands of U.S. car dealerships’ operations. Since the initial attack, more dealerships have been hit. The attack has impacted more than 15,000 dealership locations across North America.

The attack led to a shutdown of CDK’s IT systems, impacting phones and applications and forcing dealerships to revert to manual processes. While the specifics of the attack are unconfirmed, ransomware is suspected. CDK is working to restore services and advises disconnecting always-on VPNs.

To make matters worse, there have been reports that the attack has moved into a social engineering phase since the initial breach. CDK reports that threat actors have been contacting customers directly and posing as CDK Global members.

While the damage continues to pile up, it’s more important than ever for dealerships and their employees to be vigilant and take extra precautions when receiving emails, messages, or phone calls. Working with a cybersecurity company for dealerships is a logical first step for dealers who haven’t prepared or are unsure of their security plans.

Protecting Dealerships from Cyberattacks

Dealerships should implement the following measures to protect against future cyberattacks:

  1. Regular Backups: Ensure all critical data is backed up and stored securely.
  2. Update Systems: Keep all software and systems updated with the latest security patches.
  3. Employee Training: Conduct regular cybersecurity training to educate employees on recognizing phishing and other cyber threats.
  4. Multi-Factor Authentication (MFA): Implement MFA to add an extra layer of security to sensitive systems.
  5. Incident Response Plan: Develop and regularly update a robust incident response plan to address and mitigate the impact of cyberattacks quickly.

FTC Safeguards Rule Compliance

Dealerships must also adhere to the Federal Trade Commission’s (FTC) Safeguards Rule, which includes the following requirements:

  1. Security Program: Establish a comprehensive security program that protects customer information.
  2. Risk Assessments: Conduct regular risk assessments to identify and address vulnerabilities.
  3. Access Controls: Limit access to customer information to only those employees who need it to perform their duties.
  4. Encryption: Encrypt all customer information, both in transit and at rest, to prevent unauthorized access.
  5. Service Provider Oversight: Ensure service providers can maintain appropriate safeguards for customer information.

By following these steps and complying with FTC requirements, dealerships can enhance their cybersecurity posture and better protect against future threats.

Need help securing your dealership and maintaining compliance? In-Telecom can help! Schedule a consultation today, and let’s discuss your cybersecurity plans.

In-Telecom Announces the Acquisition of Lantana Communications, Expanding into The Heart of Texas

SLIDELL, LA., May 28, 2024 —  In-Telecom Consulting, LLC, a leading telecommunications and managed services company based in Slidell, LA, has acquired Lantana Communications, a Fort Worth, TX, telecommunications and managed services provider. 

The strategic acquisition enhanced In-Telecom’s capabilities by increasing its presence in the Texas market and adding a proven carrier services team to enhance In-Telecom’s current offerings. The company believes Lantana’s 34 years of experience will be monumental to In-Telecom’s continued growth efforts. 

“We’re excited to welcome Lantana to the In-Telecom team. Together, we’ll be able to greatly expand our offerings and become an industry leader across the Gulf South,” said Shawn Torres, CEO of In-Telecom. “We’re overjoyed to introduce a mature managed services offering to Lantana’s existing clients, as well as video surveillance and access control services they haven’t had access to previously.”

This acquisition positions In-Telecom to change the region’s competitive landscape. “The acquisition was a great fit from a core values perspective,” said Jimmy Burns, COO of In-Telecom. “When our clients win, our company wins, and our communities win. The vision and values of Lantana align perfectly with ours, and we’re excited for what the future holds.”

Founded in 2009 as a telecommunications consulting firm, In-Telecom has seen rapid growth across the U.S. As a result of the acquisition, In-Telecom’s clients will have the support of over 130 of the most talented technology professionals, offering 24/7 service and support to more than 1,000 clients across the U.S.

About In-Telecom

In-Telecom is a leading full-service technology provider, providing telecommunications, managed I.T. and cybersecurity services, video surveillance, and access control solutions across the U.S. In-Telecom helps its partners utilize technology, eliminate challenges, improve operations, and scale their businesses. Since its founding, In-Telecom has focused on creating lasting partnerships that benefit clients, companies, and communities.

For more information, visit in-telecom.com.

About Lantana Communications

Lantana Communications is a telecommunications provider providing telecom, call center solutions, and carrier services in the South Central U.S. Since its founding in 1990, Lantana has focused on creating lasting relationships in the community, helping customers see telecom and I.T. as a service, not a problem.

For more information, visit lantanacom.com.

Employee Spotlight: James Estopinal

Meet James Estopinal, the Director of Voice at In-Telecom. James plays an essential role in developing and maintaining our telecommunications solutions and has committed himself to innovation, delivering the best possible product to our clients.

James started his career with a passion for learning and earned certifications on telecom platforms like Nortel, NEC, Mitel, Avaya, and Broadsoft. Since then, he’s used his expertise to develop ITC Cloud and identify new opportunities to provide better communication options.

Continue reading